Genistein transfersome-embedded topical delivery system for skin melanoma treatment

Introduction Skin melanoma poses a prime global health challenge owing to its high metastatic potential, drug resistance, rising incidence, and concomitant mortality. Throughout most of the 20th century, the incidence rate increased steadily, with an anticipated annual growth of 3–7%. The imperative factors associated with the development of cancer relate to exposure (e.g. sunlight exposure), stage of disease, genetic syndromes, number of tumors, and immunosuppression (Prajapat et al., ). Because melanoma, which arises from melanocytes, is the most malignant, invasive, and metastatic skin cancer, localized treatment strategies become unproductive and tedious. Although chemotherapy, radiotherapy, and immunotherapy have remained the cornerstone of melanoma treatment, their major drawbacks are adverse toxic effects and inadequate localization of chemotherapeutic agents, in addition to resistance to current immunosuppressive checkpoint inhibitors. The stromal cell barrier within the immunosuppressive microenvironment of the carcinoma is the primary trigger of drug resistance in melanoma (Mason et al., ). Similarly, traditional therapy regimens for such malignant tumors suffer from suboptimal therapeutic efficiency owing to off-target drug transport and inefficient internalization by target receptors on tumor cells. Thus, targeted nanomedicines that provide site-specific delivery bypass such hurdles. Patients with melanoma are benefiting from better treatment outcomes thanks in large part to nanotechnology. Nanosized drug delivery systems are colloidal substances that include inorganic, polymeric, lipidic, or hybrid particulate materials with nanometric dimensions.
Proteins, peptides, antibodies, nucleic acids, radiotherapeutics, and chemotherapeutic drugs can all be effectively transported by these particles (Meteoglu and Erdemir, ). Phytonanomedicines are an emerging class of agents that affect cell signaling and are used as chemopreventive agents in cutaneous cancer treatment, enabling drug penetration and precise targeting with mitigated adverse effects when applied topically. In this regard, flavonoids are the most common polyphenols, with broad treatment modalities such as antioxidant, antiviral, anti-inflammatory, antibacterial, and anti-allergic effects. Numerous flavonoids hamper several signal transduction pathways, promoting apoptosis, restricting angiogenesis and metastasis, and limiting cell proliferation (Meteoglu and Erdemir, ). Genistein (Gen) is the major isoflavone in human diets, abundant in soybean foods. Numerous studies have demonstrated that Gen holds great promise as a chemotherapeutic agent, chiefly controlling cellular events involving cell proliferation, migration, apoptotic cell death, metastasis, and angiogenesis in different skin cancer types (Andrade et al., ). However, poor aqueous solubility, rapid metabolism and excretion, and unfavorable bioavailability are the main shortcomings that restrict its overall therapeutic potency. Despite the high stability of genistein during storage, several reports describe possible degradation pathways. Genistein forms conjugates under acidic conditions, which then degrade further (Wang et al., ). It also forms a conjugate with very high UV absorption in the presence of sugars (Ungar et al., ). Another possible degradation pathway is autodegradation at high temperature (60 °C) to form a Maillard browning product (Davies et al., ).
A variety of nanosized drug delivery technologies, including nanocapsules (de Zampieri et al., ), lipid nanoparticles (Andrade et al., ), nanomicelles (Yin et al., ), nanoemulsions (Back et al., ), nanocrystals (Wang et al., ), metallic nanoparticles (Ghasemi Goorbandi et al., ), liposomes (Song et al., ) and gold nanoparticles (Vodnik et al., ), have been engineered to overcome these inherent hindrances and widen the options for the treatment of several malignant diseases. Among these, transfersomes (Tfs), a novel class of flexible nanovesicles for effective transdermal drug delivery, penetrate rapidly via the intercellular lipid pathway of the subcutaneous tissue. They consist of at least one inner aqueous compartment enclosed by a bilayer of concentric phospholipids combined with an edge activator (nonionic surfactant), which helps solubilize the stratum corneum (Rai et al., ; Srivastava et al., ). Transfersomes have attracted enormous interest in recent years owing to their well-organized, depot release of low and high molecular weight drugs, and transferosomal delivery was among the first approaches to encapsulate both hydrophilic and lipophilic drugs without toxicological effects (Gayathri and Sangeetha, ). They offer numerous advantages, such as evading variable GI absorption and GI incompatibility, avoiding first-pass metabolism, upgrading bioavailability, better stability, reducing dosing frequency, and improving patient compliance (Rai et al., ). They are metastable vesicles whose membranes are extremely flexible and ultra-deformable, allowing them to be compressed to less than one-tenth of their own size without measurable loss when applied under non-occlusive conditions (Maji et al., ). Despite these many advantages, transfersomes face several challenges and limitations in their practical implementation.
The hydrophilic surface conferred by the edge activator limits the loading efficiency of lipophilic drugs in transfersomes compared with other phospholipid-based nanovesicles. Moreover, the nonionic surfactants used as edge activators may perforate the transfersome membrane, increasing the liquefaction of the phospholipid bilayer (van den Bergh et al., ). Another challenge is the unequal hydration of the stratum corneum layers: the inner parts are reported to be more hydrated than those near the epidermis, which disrupts the concentration gradient required for continuous drug absorption and may produce a depot effect (Jain et al., ). In addition, large-scale production of transfersomes remains a major challenge (Matharoo et al., ). A further limitation is that their stability depends on the type of phospholipid used in their preparation, because phospholipids are liable to oxidative degradation upon exposure to air, sunlight, or high temperatures (Parkash et al., ). The present study aimed to develop a novel topical delivery system of genistein for the treatment of skin melanoma using transfersomes as a nanocarrier. Transfersomes with various ratios of drug, phospholipid, and edge activator were prepared and characterized. The optimized transferosomal formulations, F5 and F6, were selected based on particle size, PDI, zeta potential, and encapsulation efficiency data, then subjected to further characterization, including physicochemical characterization, transmission electron microscopy (TEM), in vitro drug release, and stability studies. Furthermore, the antitumor activity of the optimized vesicular formulation was comprehensively evaluated in 3D ex vivo skin melanoma tumor spheroids through cell viability and Live/Dead cell assays.
Overall, the developed transferosomal formulation is anticipated to improve therapeutic efficacy against skin melanoma. Materials and methods 2.1. Materials Genistein (4′,5,7-trihydroxyisoflavone, C15H10O5, >95%) (Gen) was obtained as a free gift from geniVida™ TG, Germany. Span 60 (Sp60), low molecular weight chitosan (50–190 kDa) (CS), and D-α-tocopherol polyethylene glycol 1000 succinate (TPGS) were purchased from Sigma-Aldrich, St. Louis, MO. Phospholipon 90 G (PL90G) was procured from Lipoid, Germany. Carbopol 940 (CP940) was obtained as a free gift sample from Lubrizol Advanced Materials, Inc. (Cleveland, OH). Dacarbazine, Dulbecco's Modified Eagle Medium (high glucose) (DMEM) and 10% fetal bovine serum were purchased from Fisher Scientific (Fair Lawn, NJ). The WM164 skin melanoma cell line (RRID:CVCL number-7928) was obtained as a free gift from Dr. Lawrence Pfeffer's lab, Department of Pathology, University of Tennessee Health Science Center, Memphis, TN. 3D tissue culture gel (Col-Tgel, medium stiffness) was purchased from 101Bio, Mountain View, CA. The cell-counting assay kit (CCK-8) was purchased from Dojindo Molecular Technologies, Japan. The Cyto3D™ Live-Dead Assay Kit was purchased from The Well Bioscience Inc., North Brunswick Township, NJ. All other chemicals used in this study were of analytical reagent grade. The research in the current manuscript was conducted as a collaboration between two labs: Dr. Amira Motawea's lab (the first author) and Dr. Mohamed Ibrahim's lab (the corresponding author), as follows. Studies done in Dr. Amira Motawea's lab, Department of Pharmaceutics, Faculty of Pharmacy, Mansoura University, Mansoura, Egypt, included preparation of the original formulations, in vitro drug release, size characterization, pH, viscosity, XRD, DSC and FTIR. Studies done in Dr.
Mohamed Ibrahim's lab, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA, included preparation of the optimized formulations, size characterization, assessment of anti-tumor activity, cell viability and the 1-year stability study. 2.2. Quantification of genistein Genistein was quantified using a previously reported HPLC-UV method with minor changes. Briefly, an HPLC system (KNAUER, Azura, Germany) fitted with a Supelco Kromasil C18 column (5 μm, 100 Å, 4.0 mm × 300 mm) was used. A mixture of ammonium acetate (pH 3.37) and methanol (40:60) served as the mobile phase at a flow rate of 1.0 mL/min. The column was kept at 25 °C and the drug was detected at 6.5 min at a λmax of 254 nm using a photodiode array detector (Jandera et al., ; Kumar et al., ). 2.3. Preparation of genistein transfersomes Genistein transfersomes (Gen Tfs) were fabricated by the thin-film hydration method with some modifications (Sun et al., ). Preliminary studies were carried out to optimize the formulation. Concisely, different drug quantities (6, 8 and 10 mg Gen) were weighed and dissolved in 5 mL absolute ethyl alcohol using an ultrasonic bath (Sonix IV, USA). TPGS as the edge activator (EA), Phospholipon 90 G (PL90G) and Sp60 at different molar ratios were dissolved in 5 mL chloroform followed by sonication. Both solutions were then mixed in a round-bottom flask. The solvents were evaporated at 60 °C under reduced pressure for 10 min at 120 rpm using a rotary evaporator (Rotavapor R-300, BUCHI, Germany) to form a uniform thin lipid film. The deposited film was hydrated with 10 mL deionized water containing glycerin (2.25% w/v) for 15 min at 60 °C and 120 rpm. The obtained lipid vesicles were left to harden for 2 h at 25 °C. The colloidal dispersion was sonicated using a probe sonicator (Sonics, Newtown, CT) at 30% amplitude (10 s on/off cycles) for 2 min in an ice bath. Gen Tfs were kept at 4 °C until further analysis.
The PL90G:Sp60 molar ratio and drug concentration were the critical formulation parameters, systematically varied at different levels to obtain the optimized Gen Tfs. The blank formulation was prepared by the same method without addition of the drug. 2.4. Determination of entrapment efficiency The entrapment efficiency of Gen in the vesicular formulations was determined by the centrifugal ultrafiltration technique as previously reported (Mittal et al., ; Rassu et al., ). The vesicular formulations were placed in an Amicon ultra-centrifugal filter with a 10 kDa molecular weight cutoff, centrifuged (Acculab cooling centrifuge, USA) at 13,000 rpm for 1 h, and then analyzed for drug content by the HPLC-UV method described above. The entrapment efficiency was calculated as: %EE = [(total amount of Gen − amount of free Gen) / total amount of Gen] × 100. 2.5. Particle size, polydispersity index (PDI) and zeta potential determination Particle size and PDI of the selected vesicular formulations were analyzed by dynamic light scattering using a Zetasizer Pro (Malvern Instruments Ltd., UK). Samples were diluted 100-fold with Milli-Q water before measurement. Three replicate size measurements were performed and the average size was calculated. For zeta potential (ZP) measurements, the same instrument was used with the laser Doppler electrophoresis technique after 1000-fold dilution with Milli-Q water. Three series of five consecutive measurements were performed and an average ZP value was obtained. All determinations were performed in triplicate at 25 °C (Ferrado et al., ). 2.6. Transmission electron microscopy The morphologies of Gen and blank Tfs were examined using transmission electron microscopy (TEM) (JEOL JEM1200EX II electron microscope). Briefly, Tf formulations were diluted 1:100 with Milli-Q water.
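The entrapment-efficiency equation in Section 2.4 reduces to a single expression; a minimal sketch with illustrative amounts (not measured data):

```python
def entrapment_efficiency(total_gen_mg, free_gen_mg):
    """%EE = (total drug - free drug in the ultrafiltrate) / total drug x 100."""
    if total_gen_mg <= 0:
        raise ValueError("total drug amount must be positive")
    return (total_gen_mg - free_gen_mg) / total_gen_mg * 100

# Illustrative numbers: 8 mg Gen in the batch, 1.2 mg unentrapped in the filtrate
print(f"{entrapment_efficiency(8.0, 1.2):.1f}%")  # 85.0%
```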
Two microliters of the diluted Tfs were placed on 400-mesh copper grids covered with Formvar film (Electron Microscopy Sciences EMS, Hatfield, PA). The grids were allowed to dry for 2 h in a desiccator and then negatively stained with Uranyless EM stain (Electron Microscopy Sciences EMS, Hatfield, PA) before examination by TEM (Ibrahim et al., ). 2.7. Physicochemical characterization of Gen Tfs Compatibility between Gen and the various transfersome components was assessed by Fourier transform infrared (FTIR) spectroscopy in transmission mode. FTIR spectra were recorded in KBr diffuse reflectance mode using a Nicolet iS10™ spectrometer (Thermo Fisher Scientific, Madison, Wisconsin). Gen, PL90G, Sp60, TPGS, and the lyophilized blank and medicated F5 and F6 were each homogeneously ground with KBr, manually compressed into disks, and scanned over a frequency range of 4000 to 400 cm−1. A blank KBr disk was used for the background scan. FTIR data-processing software (OMNIC version 8) was then used to record, visualize and interpret the corresponding peaks. Distinctive IR peaks were reported as the average of triplicate sample measurements (Abd El Hady et al., ). For DSC analysis, a differential scanning calorimeter (Pyris 6 DSC, Perkin Elmer, USA) was used. Accurately weighed samples of Gen, PL90G, Sp60, TPGS, and the lyophilized blank and medicated F5 and F6 were hermetically sealed in crimped aluminum pans. The samples were scanned over a temperature range of 30–400 °C at a heating rate of 10 °C/min. Data analysis curves were generated automatically via Calisto treatment software. Before every run, the baseline was optimized (Abd El Hady et al., ). X-ray diffraction (XRD) patterns of Gen, PL90G, Sp60, TPGS, the lyophilized blank and the corresponding formulations (F5 and F6) were recorded using an X-ray diffractometer (Bruker, Germany) equipped with Cu-Kα radiation.
The following conditions were employed: a scanning range of 6–50° 2θ, a voltage of 45 kV, and a current of 40 mA (Ibrahim et al., ). 2.8. Preparation of genistein-loaded vesicular formulation hydrogels The optimized Gen Tfs formulations (F5 and F6) were incorporated into pre-prepared hydrogels for topical application. The hydrogels consisted of 2% low molecular weight chitosan (CS) or 0.5% Carbopol 940 (CP940). The CS hydrogel was prepared in 1% glacial acetic acid followed by pH adjustment to 5.5, while the CP940 hydrogel was prepared in Milli-Q water. The concentrations of the chitosan and Carbopol 940 polymers were selected based on a pre-formulation study. The hydrogels were then homogeneously mixed with the selected Gen Tfs formulations in a 1:1 ratio to produce Gen Tf gels. For the CP940 Gen Tf hydrogel, triethanolamine was added dropwise to achieve the desired viscosity and pH. The prepared Gen Tf hydrogels were packed in air-tight containers and stored at 4 °C for further characterization. 2.9. Homogeneity, viscosity and pH evaluations The prepared hydrogels were inspected visually for any clumps or aggregation. Additionally, all hydrogels were pressed between the index finger and thumb and assessed as homogeneous or non-homogeneous (Abdellatif et al., ; Insaf et al., ). The viscosity of the formulations was assessed on a Discovery Hybrid Rheometer DHR-3 (Waters TA Instruments, New Castle, DE) at 25 °C using the flow rheological technique, with a cone-and-plate attachment at a shear rate of 10 s−1 (n = 3) (Maji et al., ). For pH determination, 1 g of the prepared Gen and Gen Tf hydrogels was diluted to 20 g with Milli-Q water and the pH was measured at 25 °C with a pH meter (Beckman Fullerton, Germany). All measurements were performed in triplicate to confirm accuracy and consistency, and the results were expressed as mean ± SD. 2.10.
Determination of the percentage drug content To ensure homogeneous distribution of the drug-loaded Tfs in the hydrogels, the drug content was analyzed by accurately weighing 0.1 g of each hydrogel into a 10 mL volumetric flask. The gel formulations were dissolved in 1 mL DMSO, vortexed using a vortex mixer (Fisher Scientific, Fair Lawn, NJ), and the volume was completed to 10 mL with Milli-Q water. After filtering the solution through a 0.22 µm Millipore filter, samples were assayed for drug concentration using the HPLC-UV method described above. 2.11. In vitro drug release and kinetics studies A dialysis bag with a molecular weight cutoff of 10,000–12,000 Da was used for the in vitro drug release study of the prepared vesicular formulations. The hydrogels of the optimized transferosomal formulations F5 and F6, the optimized vesicular formulation dispersions (F5 and F6), and a Gen aqueous suspension (control) were evaluated. One hundred milliliters of phosphate-buffered saline pH 7.4 (PBS) containing 1% polysorbate 20 was used as the release medium at 37 ± 0.5 °C, stirred at 50 rpm. A 2 mL aliquot was withdrawn at predetermined time points over 48 h and immediately replaced with 2 mL of fresh PBS (Raval et al., ). To gain insight into the mechanism of in vitro release, the data were fitted to different kinetics models, including zero order, first order, Higuchi (Higuchi, ), Korsmeyer–Peppas (Korsmeyer et al., ) and Weibull. Each experiment was performed in triplicate and the results were calculated as mean ± SD. 2.12. 3D cell culture ex vivo skin melanoma simulation for evaluation of the optimized Gen Tfs formulations' antitumor activity 2.12.1.
Formation of 3D tumor spheroids To form the 3D skin melanoma tumor spheroids, the WM164 skin melanoma cell line was first grown in a traditional 2D culture in high-glucose DMEM containing 10% fetal bovine serum and 1% penicillin/streptomycin, incubated in a humidified atmosphere of 5% CO₂ at 37 °C. The cells were expanded under these 2D conditions to produce the number of cells needed to form the 3D tumor spheroids. After reaching the required cell count, WM164 cells were dispersed in the gelatin solution (component A of the Col-Tgel) at a concentration of 2 × 10⁶ cells/mL of gelatin solution. The transglutaminase crosslinker (component B of the Col-Tgel) was mixed with the cell dispersion by repeated gentle pipetting. In a 48-well plate, 20 µL of this mixture was dropped into the middle of each well. The plate was incubated for 45 min in a humidified atmosphere of 5% CO₂ at 37 °C without any culture medium to allow the Col-Tgel to reach the required consistency. After gel hardening and formation of the 3D tumor spheroids, 0.5 mL of high-glucose DMEM was added to cover the gel and prevent its drying. The 3D tumor spheroids were allowed to grow for three days, with daily medium changes, before drug treatment (Fang et al., ; 101Bio, ). 2.12.2. Evaluation of Gen Tfs antitumor activity After the three days of incubation required for 3D tumor spheroid growth (per the manufacturer's protocol), treatment was started for five days (the recommended treatment period for our positive control). The evaluated formulations included Gen solution; Gen Tfs; blank Tfs; Dacarbazine solution (positive control), a well-known chemotherapeutic drug recommended for melanoma skin cancer; and untreated tumor spheroids (negative control). All formulations were sterilized and used at a concentration of 0.8 mg/mL.
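At the seeding density described in Section 2.12.1, the number of cells in each spheroid-forming droplet follows directly from the stated volume and concentration; a quick arithmetic check:

```python
# Cells per Col-Tgel droplet at the seeding density stated in Section 2.12.1
# (2 x 10^6 WM164 cells per mL of gelatin solution, 20 uL per well).
density_cells_per_ml = 2_000_000
droplet_volume_ml = 20 / 1000  # 20 uL expressed in mL

cells_per_droplet = density_cells_per_ml * droplet_volume_ml
print(f"{cells_per_droplet:.0f} cells per droplet")
```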
All formulations were diluted with the culture medium and replaced twice with freshly diluted formulations during the experiment to ensure that the cells received the required nutrition throughout the evaluation period. After five days of incubation with the different treatments, the antitumor activity was evaluated using both cell viability and live/dead cell assays. 2.12.2.1. Cell viability test After 5 days of incubation with the different treatments, cell-counting assay kit-8 (CCK-8), which contains WST-8 [2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium, monosodium salt], was used to quantify cell viability in the tumor spheroids, exploiting the ability of live cells to reduce the WST-8 tetrazolium salt to an orange, water-soluble formazan dye. The color intensity is directly proportional to the number of living cells. According to the manufacturer's protocol, the medium in each well was replaced with 500 µL of fresh medium containing 50 µL of stock CCK-8 reagent (Kumari et al., ). The plate was incubated for 4 h in a humidified atmosphere of 5% CO₂ at 37 °C. After 4 h, 100 µL of medium was transferred into a 96-well plate and the absorbance was measured at 450 nm with a µ-Quant universal microplate spectrophotometer (Bio-Tek Instruments, Inc., Winooski, VT). Medium containing the CCK-8 reagent served as the baseline control. Cell viability was calculated as a percentage relative to the negative control (untreated cells). Because the assay depends on the ability of live cells on the surface of the tumor spheroids to reduce the tetrazolium salt to its orange formazan derivative, followed by measuring the color intensity in the growth medium outside the spheroids, the CCK-8 assay essentially evaluates the number of viable cells on the spheroid surface (i.e.
superficial evaluation) (Fang et al., ; Laboratories, ). 2.12.2.2. Live/dead cell assay After 5 days of treatment, a live/dead cell assay was performed using the Cyto3D™ Live-Dead Assay Kit. The kit contains acridine orange (cell-permeable) and propidium iodide (cell-impermeable) dyes. Briefly, 2 µL of Cyto3D reagent was added per 100 µL of the total liquid volume in the well (hydrogel and medium). The plate was incubated in a humidified atmosphere of 5% CO₂ at 37 °C for 30 min. The medium containing the Cyto3D reagent was removed, and the tumor spheroids were washed three times with sterile PBS (Cyto3D™ Live-Dead assay kit pamphlet, ). The tumor spheroids were then imaged using a confocal microscope (Zeiss LSM 980, Germany). In contrast to the cell viability assay, confocal microscopy demonstrates, in addition to the superficial evaluation, cell viability in the deeper layers of the tumor spheroids. The dead-cell images obtained from the confocal microscope were quantified using ImageJ software, and the percentage cell death was calculated for each tested formulation relative to the positive control (i.e. Dacarbazine solution). Results were statistically analyzed by one-way ANOVA followed by Tukey's multiple comparisons test using GraphPad Prism 10 software (GraphPad Software Inc., San Diego, CA) and expressed as mean ± SD. 2.13. Stability studies of the optimized Gen Tf formulations Stability of the optimized Gen Tf formulation (F6) was tested by storage at two different temperatures (5 ± 1 °C and 25 ± 1 °C) for 12 months. Four formulations were evaluated: F6 Gen Tf dispersion; F6 Gen Tf CP940 hydrogel; blank F6 Tf dispersion; and blank F6 Tf CP940 hydrogel. Three independently prepared batches of each formulation were evaluated. All Tf formulations were packed in air-tight glass vials and protected from light by wrapping with aluminum foil.
Tf dispersion formulations were used to test aggregation, physical appearance, particle size, PDI and zeta potential changes. Tf hydrogel formulations were used to evaluate the changes in the pH, consistency and drug contents at various time points of storage. Our Tf formulations were evaluated initially and at specified time intervals—1, 2, 3, 6, 9 and 12 months. All experiments were performed in triplicate and the results were expressed as mean ± SD. Results were statistically analyzed by one way ANOVA followed by Tukey’s multiple comparisons test using GraphPad Prime 10 software (GraphPad Software Inc., San Diego, CA). Materials Genistein (4′,5,7-trihydroxyisoflavone, C15H10O5, >95%) (Gen) was obtained as a free gift from geniVida™ TG, Germany. Span 60 (Sp60) and low molecular weight chitosan (50–190 kDa) (CS), D-α-Tocopherol polyethylene glycol 1000 succinate (TPGS) were purchased from Sigma-Aldrich, St. Louis, MO. Phospholipon 90 G (PL90G) was procured from Lipoid, Germany. Carbopol 940 (CP940) was obtained as a free gift sample from Lubrizol Advanced Materials, Inc. (Cleveland, OH). Dacarbazine, Dulbecco’s Modified Eagle Medium (high glucose) (DMEM) and 10% fetal bovine serum, were purchased from Fisher Scientific (Fair Lawn, NJ). WM164 skin melanoma cell line (RRID:CVCL number-7928) was obtained as a free gift from Dr. Lawrence Pfeffer lab, Department of Pathology, University of Tennessee Health Science Center, Memphis, TN. 3D tissue culture gel (Col-Tgel, medium stiffness) was purchased from 101Bio, Mountain View, CA. Cell-counting assay kit (CCK-8) was purchased from Dojindo Molecular Technologies, Japan. Cyto3D™ Live-Dead Assay Kit was purchased from The Well Bioscience Inc. North Brunswick Township, NJ. All other chemicals used in this study were analytical reagent grade. The research in the current manuscript was conducted as a collaboration project between two labs; Dr. Amira Motawea lab (the first author) and Dr. 
Mohamed Ibrahim lab (the corresponding author) as follows: Studies done in Dr. Amira Motawea lab, Department of Pharmaceutics, Faculty of Pharmacy, Mansoura University, Mansoura, Egypt including; preparation of the original formulations, in vitro drug release, size characterization, pH, viscosity, XRD, DCS and FTIR. Studies done in Dr. Mohamed Ibrahim Lab, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA including; preparation of the optimized formulations, size characterization, assessment of anti-tumor activity, cell viability and 1-year stability study. Quantification of genistein Genistein was quantified using a previously reported HPLC-UV method, with few changes. Briefly, HPLC system (KNAUER, Azura, Germany) attached to Supelco kromasil C18 column (5 μm, 100 Å, 4.0 mm × 300 mm) was used. A mixture of ammonium acetate (pH 3.37) and methanol (40:60) was used as mobile phase at a flow rate of 1.0 mL/min. The column was kept at 25 °C and the drug was detected after 6.5 min at a λ max 254 nm using photodiode array detector (Jandera et al., ; Kumar et al., ). Preparation of genistein transfersomes Genistein transfersomes (Gen Tfs) were fabricated by thin film hydration method with some modification (Sun et al., ). Preliminary studies were accomplished to optimize our formulation. Concisely, different drug quantity (6, 8 and 10 mg Gen) was weighed and dissolved in 5 mL absolute ethyl alcohol using an ultrasonic bath (Sonix IV, USA). TPGS as the edge activator (EA), phospholipon 90 G (PL90G) and Sp60 with different molar ratios were dissolved in 5 mL chloroform followed by sonication. Subsequently, both solutions were mixed in a rounded bottom flask. The solvents were evaporated at 60 °C under reduced pressure for 10 min at 120 rpm using a rotary evaporator (Rotavapor R-300, BUCHI, Germany) to form a uniform thin lipid film. 
The deposited film was hydrated with 10 mL deionized water containing glycerin (2.25%w/v) for 15 min at 60 °C at 120 rpm. The obtained lipid vesicles were left to harden for 2 h at 25 °C . The colloidal dispersion was sonicated using a probe sonicator (Sonics, Newtown, CT) with amplitude of 30% (10 sec on/off cycle) for 2 min in ice bath. Gen Tfs were kept at 4 °C until further analysis. The effect of PL90G:Sp60 molar ratio and drug concentration are the critical formulation parameters that were systematically optimized at different levels for the preparation of the optimized Gen Tfs. The blank formulation was prepared using the same method without the addition of the drug. Determination of entrapment efficiency The entrapment efficiency of Gen in the vesicular formulations was determined by the centrifugal ultrafiltration technique as previously reported (Mittal et al., ; Rassu et al., ). The vesicular formulations were placed in Amicon ultra-centrifugal filter of 10 kDa molecular weight cutoff and centrifuged (Acculab cooling centrifuge, USA) at 13,000 rpm for 1 h then analyzed for drug content by the HPLC-UV method mentioned before. The following equation was used to calculate the entrapment efficiency: %EE = Total amount of Gen − amount of free Gen Total amount of Gen × 100 Particle size, polydispersity index (PDI) and zeta potential determination Particle size and PDI of the selected vesicular formulations were analyzed by dynamic light scattering technique using a Zetasizer Pro (Malvern Instruments Ltd., UK). Samples were diluted 100-fold by Milli-Q water before the measurement. Three replicate size measurements were performed, and the average size value was calculated. For zeta potential (ZP) measurements, the same instrument was used utilizing laser Doppler Electrophoresis technique after 1000-fold dilution by Milli-Q water. Three series of five consecutive measurements were performed and an average ZP value was obtained. 
All determinations were performed in triplicate at 25 °C (Ferrado et al., ). Transmission electron microscopy The morphology of Gen and blank Tfs were examined using transmission electron microscopy (TEM) (JEOL JEM1200EX II electron microscope). Briefly, Tfs formulations were diluted 1:100 with Milli-Q water. On 400 mesh copper grids covered with Formvar film (Electron Microscopy Sciences EMS, Hatfield, PA), two microliters of the diluted Tfs were placed. The grids were allowed to dry for 2 h in a desiccator followed by negative staining with Uranyless EM stain (Electron Microscopy Sciences EMS, Hatfield, PA) before examination by TEM (Ibrahim et al., ). Physicochemical characterization of Gen Tfs Compatibility between Gen and various transfersomes components was determined by Fourier transform infrared (FTIR) in the transmission mode. FTIR spectra were recorded in KBr diffuse reflectance mode utilizing Nicolet iS10™ spectrometer (Thermo Fisher Scientific, Madison, Wisconsin). FTIR analysis for Gen, PL90G, Sp60, TPGS, the lyophilized blank and medicated F5 and F6 was performed by homogenously ground with KBr, manually compressed into disks and scanned over a frequency range of 4000 to 400 cm −1 . A blank KBr disk was used to execute a background scan. Subsequently, FTIR software for data processing (OMNIC version 8) was used to record, visualize and interpret the corresponding peaks. The average of triplicate sample measurements of the IR distinctive peaks spectra was attained (Abd El Hady et al., ). For DSC analysis, differential scanning calorimeter (Pyris 6 DSC, Perkin Elmer, USA) was used. Accurately weighed samples including Gen; PL90G; Sp60; TPGS; the lyophilized blank and medicated F5 and F6 were hermetically sealed in a crimped aluminum pan. The samples were scanned in the temperature range 30–400 °C with a heating rate of 10 °C/min. Our data analysis curves were automatically generated via Calisto treatment software. 
Before every run, the baseline was optimized (Abd El Hady et al., ). X-ray diffraction (XRD) patterns of Gen, PL90G, Sp60, TPGS, the lyophilized blank and their corresponding formulations (F5 and F6) were recorded utilizing an X-ray diffractometer (Bruker, Germany) equipped with Cu-Kα radiation. The assessment was carried out with a 2-theta scanning range of 6–50° diffraction angle, a voltage of 45 kV and a current of 40 mA (Ibrahim et al., ). Preparation of genistein-loaded vesicular formulation hydrogels The optimized Gen Tf formulations (F5 and F6) were incorporated into pre-prepared hydrogels for topical application. The prepared hydrogels were 2% low-molecular-weight chitosan (CS) or 0.5% Carbopol 940 (CP940). The CS hydrogel was prepared in 1% glacial acetic acid followed by pH adjustment to 5.5, while the CP940 hydrogel was prepared in Milli-Q water. The concentrations of the chitosan and Carbopol 940 polymers were selected based on a pre-formulation study. The hydrogels were then homogeneously mixed with the selected Gen Tf formulations in a 1:1 ratio to produce Gen Tf gels. For the CP940 Gen Tf hydrogel, triethanolamine was added dropwise to achieve the desired viscosity and pH of the formulation. The prepared Gen Tf hydrogels were packed in air-tight containers and stored at 4 °C for further characterization. Homogeneity, viscosity and pH evaluations The homogeneity was assessed visually for any clumps or aggregation in the prepared hydrogels. Additionally, all hydrogels were pressed between the index finger and thumb and rated as homogeneous or non-homogeneous (Abdellatif et al., ; Insaf et al., ). The viscosity of the formulations was assessed on a Discovery Hybrid Rheometer DHR-3 (Waters TA Instruments, New Castle, DE) at 25 °C using the flow rheological technique. The viscosity was determined using a cone-and-plate attachment at a shear rate of 10 s−1 (n = 3) (Maji et al., ).
For determination of pH, 1 g of the prepared Gen and Gen Tf hydrogels was diluted to 20 g with Milli-Q water and the pH was measured at 25 °C by pH meter (Beckman Fullerton, Germany). All measurements were performed in triplicate to confirm accuracy and consistency, and the results were expressed as mean ± SD. Determination of the percentage drug content To ensure the homogeneity of drug distribution in the Tf hydrogels, the drug content was analyzed by accurately weighing 0.1 g of each hydrogel into a 10 mL volumetric flask. The gel formulations were dissolved in 1 mL DMSO, the mixtures were vortexed using a vortex mixer (Fisher Scientific, Fair Lawn, NJ), and then the volume was completed to 10 mL with Milli-Q water. After filtering the solution through a 0.22 µm Millipore filter, samples were assayed for drug concentration using the HPLC-UV method described above. In vitro drug release and kinetics studies A dialysis bag of molecular weight cutoff 10,000–12,000 Da was used to perform the in vitro drug release study of the prepared vesicular formulations. The hydrogels of the optimized transferosomal formulations F5 and F6, the optimized vesicular formulation dispersions (F5 and F6) and a Gen aqueous suspension (control) were evaluated in this study. One hundred milliliters of phosphate buffered saline pH 7.4 (PBS) containing 1% polysorbate 20 was used as the release medium at 37 ± 0.5 °C with stirring at 50 rpm. A sample aliquot (2 mL) was withdrawn at predetermined time points over 48 h and immediately replaced with 2 mL of fresh PBS (Raval et al., ). To gain insight into the mechanism of in vitro release, the release data were fitted to different kinetics models including zero order, first order, Higuchi (Higuchi, ), Korsmeyer–Peppas (Korsmeyer et al., ) and Weibull. Each experiment was performed in triplicate and the results were calculated as mean ± SD.
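The model fitting described above can be sketched with SciPy's nonlinear least squares. The time points and cumulative-release percentages below are hypothetical placeholders shaped like a biphasic profile, not the study's data, and the model forms are the common literature parameterizations:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time points (h) and % cumulative drug released
t = np.array([0.5, 1, 2, 3, 6, 12, 24, 36, 48], dtype=float)
f = np.array([8, 14, 22, 30, 38, 45, 52, 56, 60], dtype=float)

def weibull(t, a, b):
    """Weibull release model: F(t) = 100*(1 - exp(-a * t**b)); b is the shape (beta)."""
    return 100.0 * (1.0 - np.exp(-a * t**b))

def korsmeyer_peppas(t, k, n):
    """Korsmeyer-Peppas power law, conventionally fitted to the first 60% of release."""
    return k * t**n

(wa, wb), _ = curve_fit(weibull, t, f, p0=[0.1, 0.5], maxfev=10000)
mask = f <= 60.0
(kk, kn), _ = curve_fit(korsmeyer_peppas, t[mask], f[mask], p0=[10.0, 0.5])

print(f"Weibull beta = {wb:.3f} (<= 0.75 suggests Fickian diffusion)")
print(f"Korsmeyer-Peppas n = {kn:.3f}")
```

In practice each candidate model would be fitted to every formulation's mean release profile and compared via its determination coefficient, as done in the study.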
3D Cell culture ex vivo skin melanoma simulation for evaluation of the optimized Gen Tfs formulations antitumor activity 2.12.1. Formation of 3D tumor spheroids To form the 3D skin melanoma tumor spheroids, the WM164 skin melanoma cell line was first allowed to grow in a traditional 2D culture design in high glucose DMEM medium containing 10% fetal bovine serum and 1% penicillin/streptomycin and incubated in a humidified atmosphere of 5% CO2 at 37 °C. The cells were grown under these 2D conditions to produce the number of cells needed to form the 3D tumor spheroids. After reaching the required cell count, WM164 cells were dispersed in the gelatin solution (component A of the Col-Tgel) at a concentration of 2 × 10^6 cells/mL of gelatin solution. Transglutaminase crosslinker (component B of the Col-Tgel) was mixed with the cell dispersion by repeated gentle pipetting. In a 48-well plate, 20 µL of this mixture was dropped into the middle of each well. The plate was incubated for 45 min in a humidified atmosphere of 5% CO2 at 37 °C without the addition of any culture medium to allow the Col-Tgel to reach the required consistency. After gel hardening and the formation of the 3D tumor spheroids, 0.5 mL of high glucose DMEM medium was added to cover the gel and prevent its dryness. The 3D tumor spheroids were allowed to grow for three days, with daily medium changes, before drug treatment (Fang et al., ; 101Bio, ). 2.12.2. Evaluation of Gen Tfs antitumor activity After the three days of incubation required for 3D tumor spheroid growth (per the manufacturer's protocol), treatment was applied for five days (the recommended treatment period for our positive control). Different formulations were evaluated, including Gen solution, Gen Tfs, blank Tfs, Dacarbazine solution (positive control), a well-known chemotherapeutic drug recommended for melanoma skin cancer, as well as untreated tumor spheroids (negative control).
All formulations were sterilized and used at a concentration of 0.8 mg/mL. All formulations were diluted with the culture medium and replaced twice with freshly diluted formulations during the experiment to ensure that the cells received the required nutrition during the whole evaluation period. After five days of incubation with the different treatments, the antitumor activity was evaluated using both cell viability and live/dead cell assays. 2.12.2.1. Cell viability test After 5 days of incubation with the different treatments, cell-counting assay kit-8 (CCK-8), which contains WST-8 [2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium, monosodium salt], was used to quantify the cell viability in the tumor spheroids through the ability of live cells to reduce the WST-8 tetrazolium salt to an orange, water-soluble formazan dye. The intensity of the orange color is directly proportional to the number of living cells. According to the manufacturer's protocol, the medium in each well was replaced by 500 µL of fresh medium containing 50 µL of stock CCK-8 reagent (Kumari et al., ). The plate was incubated for 4 h in a humidified atmosphere of 5% CO2 at 37 °C. After 4 h, 100 µL of medium was transferred into a 96-well plate and the absorbance was measured at 450 nm by a µ-Quant universal microplate spectrophotometer (Bio-Tek Instruments, Inc. Winooski, VT). Medium containing the CCK-8 reagent served as the baseline control. Cell viability was calculated as a percentage relative to the negative control (untreated cells). According to the protocol, cell viability evaluation using the CCK-8 assay kit depends mainly on the ability of the live cells on the surface of the tumor spheroids to reduce the tetrazolium salt to its orange formazan derivative, followed by measurement of the orange color intensity in the growth medium outside the tumor spheroids.
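The percentage-viability calculation relative to the untreated control can be sketched as follows; the absorbance values in the example are hypothetical:

```python
def percent_viability(sample_a450: float, blank_a450: float, control_a450: float) -> float:
    """% viability relative to the untreated control, both blank-corrected.

    sample_a450  -- A450 of treated-well medium
    blank_a450   -- A450 of medium + CCK-8 reagent only (baseline control)
    control_a450 -- A450 of untreated (negative-control) spheroid medium
    """
    corrected_sample = sample_a450 - blank_a450
    corrected_control = control_a450 - blank_a450
    if corrected_control <= 0:
        raise ValueError("control signal must exceed the reagent blank")
    return corrected_sample / corrected_control * 100.0

# Hypothetical absorbances for a treated well, the reagent blank and the control
print(round(percent_viability(0.62, 0.10, 1.15), 1))  # 49.5
```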
Therefore, we can deduce that the CCK-8 assay kit is specifically designed to evaluate the number of viable cells on the surface of the tumor spheroids (i.e. superficial evaluation) (Fang et al., ; Laboratories, ). 2.12.2.2. Live/dead cell assay After 5 days of application of all treatments, a live/dead cell assay was performed using the Cyto3D™ Live-Dead Assay Kit. The kit contains acridine orange (cell-permeable) and propidium iodide (cell-impermeable) dyes. Briefly, 2 µL of Cyto3D reagent was added per 100 µL of the total liquid volume in the well (hydrogel and medium). The plate was incubated in a humidified atmosphere of 5% CO2 at 37 °C for 30 min. The medium containing the Cyto3D reagent was removed, and the tumor spheroids were washed three times with sterile PBS (Cyto3D™ Live-Dead assay kit pamphlet, ). The tumor spheroids were then imaged using a confocal microscope (Zeiss LSM 980, Germany). In contrast to the cell viability assay, confocal microscopy assesses not only the spheroid surface but also cell viability in the deeper layers inside the tumor spheroids. The dead-cell images obtained from the confocal microscope were quantified using ImageJ software, and the percentage cell death was calculated for each tested formulation relative to the positive control (i.e. Dacarbazine solution). Results were statistically analyzed by one-way ANOVA followed by Tukey's multiple comparisons test using GraphPad Prism 10 software and expressed as mean ± SD (GraphPad Software Inc., San Diego, CA). Stability studies of the optimized Gen Tf formulations The stability of the optimized Gen Tf formulation (F6) was evaluated after storage at two different temperatures (5 ± 1 °C and 25 ± 1 °C) for 12 months. Four formulations were evaluated: F6 Gen Tf dispersion, F6 Gen Tf CP940 hydrogel, blank F6 Tf dispersion and blank F6 Tf CP940 hydrogel. Three independently prepared batches of each formulation were evaluated. All Tf formulations were packed in air-tight glass vials and protected from light by wrapping with aluminum foil. The Tf dispersion formulations were used to assess aggregation, physical appearance, particle size, PDI and zeta potential changes, while the Tf hydrogel formulations were used to evaluate changes in pH, consistency and drug content at various time points of storage. The Tf formulations were evaluated initially and at specified time intervals (1, 2, 3, 6, 9 and 12 months). All experiments were performed in triplicate and the results were expressed as mean ± SD. Results were statistically analyzed by one-way ANOVA followed by Tukey's multiple comparisons test using GraphPad Prism 10 software (GraphPad Software Inc., San Diego, CA). Results and discussion Lipid-based nanocarriers and new generations of phospholipid vesicles (e.g. transfersomes) offer better encapsulation of lipophilic drugs, improved safety, penetration through biological barriers and site-specific targeting to tumors, and hence improved therapeutic performance. In the present study, the choice of lipid-based nanovesicles (transfersomes), with their penetration-enhancing ability and structural flexibility imparted by the edge activators, is very promising as a topical drug delivery system (Abd-Allah et al., ).
Advantageously, genistein-loaded transfersomes can permeate across the stratum corneum and drive their payload more efficiently into the deeper layers of the skin, augmenting their therapeutic efficacy owing to their ultra-deformable flexible structure (Maji et al., ). Transfersome lipid-based nanocarriers were selected as the dosage form for genistein topical delivery due to the many advantages they possess. Among these advantages are their flexibility and ultradeformable structure, which allow them to pass through pores as small as one-tenth of their original size, resulting in better skin penetration (Jiang et al., ; Demartis et al., ). They can incorporate a variety of drug molecules, hydrophilic or lipophilic, small or large. They are very safe as they are usually prepared from biocompatible materials (Gayathri and Sangeetha, ). TPGS was selected as the edge activator in our Tfs due to its biocompatibility, its hydrophilic character, its antioxidant properties that could increase the stability of the Tfs, and its FDA approval (Alhakamy et al., ). Another reason for the use of TPGS in our Tfs is its P-glycoprotein inhibition ability, which helps decrease drug resistance following transdermal delivery (Skazik et al., ). The developed modified transfersomes were evaluated according to their PS, PDI and ZP values, which affect their pharmacokinetic behavior, such as absorption and distribution, as well as their toxicity and targeting capabilities (Akl et al., ). 3.1. Entrapment efficiency, particle size, PDI and zeta potential of Gen Tfs To optimize the transferosomal formulations, the effects of the PL90G:EA (TPGS) molar ratio and Gen concentration were investigated. Preliminary studies were performed to attain the optimal formulation by testing different molar ratios of the drug, Sp60 and PL90G, and different organic solvent ratios. The effects of these factors on the PS, PDI, ZP and %EE were evaluated.
The criteria for selecting the optimal Tf formulation were the smallest PS and PDI values and the highest ZP and %EE values. represents the PS and ZP values of our optimized F6 Tf formulation. As revealed in , all particle sizes of the prepared transfersomes are in the range of 150.40−266.90 nm with small PDI values (0.19–0.33), indicative of a narrow particle size distribution pattern. PDI values (<0.5) indicate a unimodal nanoparticle dispersion, which is imperative for warranting the stability of a colloidal dosage form upon storage (Danaei et al., ). Here, the average ZP values of all the prepared Tf formulations are in the range of −24.93 ± 0.15 to −48.41 ± 1.25 mV. It is worth remembering that ZP values above 30 mV (either negative or positive) are requisite for very good stability in dispersion media and colloidal systems, because the nanoparticles have enough repulsion and are less prone to form aggregates (Kamble et al., ; Maji et al., ). In addition to stability, the high negative charge carried by our Tfs offers more biocompatibility and less biological toxicity, as it has been previously reported that negatively charged particles are less toxic than positively charged ones due to their lower ability to induce the production of reactive oxygen species (Lockman et al., ; Ortiz et al., ). The negative ZP of our Tfs is conferred by the use of negatively charged lipid molecules such as PL90G, which carries negatively charged phosphate groups (Allam et al., ). In addition to the colloidal stability conferred by electric repulsion between particles, stability was further improved by the steric shielding influence of the PEG moiety of TPGS (Luiz et al., ; Yang et al., ). As presented in , the %EE of the prepared Tf formulations varies from 85.73 ± 0.26% to 96.65 ± 0.27%.
The drug entrapment efficiency increases with the increase in Gen amount from 6 to 10 mg, indicating that maximum drug encapsulation is achieved at a Gen amount of 10 mg, after which excess drug precipitates, leading to reduced %EE (data not shown). Highly lipophilic drugs have the propensity to be completely packed within the tight bilayer structure formed by Spans owing to their lipophilic nature and the low hydrophilic-lipophilic balance (HLB) value of Spans (Ramkanth et al., ). Our results demonstrate that the particle size of F5 and F6 is below 170 nm with low PDI and high ZP values compared to the other formulations, manifesting the feasibility of these systems for considerable localized Gen delivery through the skin . Furthermore, TEM analyses of blank and medicated Tfs (F6) reveal the spherical outlines of the Tfs with uniform size distribution and an absence of aggregation, confirming the particle size results obtained from the Zetasizer . 3.2. Physicochemical characterization of Gen Tfs Physicochemical evaluations, including FTIR, DSC and XRD, were performed on the lyophilized optimized blank and medicated transfersome formulations (F5 and F6). These evaluations assess the possible interaction of Gen with the other transfersome excipients and the crystallinity status of the entrapped Gen inside the Tfs to confirm their suitability for topical application. 3.2.1. FTIR study FTIR spectroscopy is a widely used nondestructive technique for studying the possible interaction of distinctive functional groups between drugs and excipients by perceiving changes in the characteristic wavenumbers of each molecule (Malviya et al., ). As elucidated in (i), Gen shows its principal infrared peaks at 3412 cm−1 (–O–H stretching), 3105 cm−1 (–C–H stretching), 1653 cm−1 (–C=O stretching), 1612–1424 cm−1 (C–C stretching), 1308–1253 cm−1 (–O–H bending), 1198–1043 cm−1 (C–O–C stretching) and 884–817 cm−1 (–C–H bending).
(ii) illustrates the spectrum of PL90G, which reveals characteristic (C–H) stretching bands at 2926 and 2855 cm−1, a stretching band of the ester carbonyl group (C=O) at 1739 cm−1 and an ester (C–O) stretching band at 1241 cm−1 (Salama et al., ). As displayed in (iii), Sp60 presents a broad band at 3420 cm−1, a sharp band at 2919 cm−1 and a characteristic peak at 1739 cm−1 (Sabry et al., ). As shown in (iv), the terminal –O–H absorption, the –C–H stretching, and the characteristic C=O and C–O stretching bands of TPGS appear at 3448, 2891, 1742 and 1113 cm−1, respectively (Sharma and Chauhan, ). As is evident from , the characteristic peaks of Gen are slightly shifted after drug encapsulation in Gen Tfs. Also, the FTIR spectra of both medicated and blank transfersomes retain the characteristic peaks of all the excipients without any major shifts, which supports the stability of our prepared transfersomes and the absence of any chemical interaction between their components (Hady et al., ). 3.2.2. DSC study DSC thermograms were recorded for Gen, PL90G, Sp60, TPGS, and the blank and medicated selected Tfs (F5 and F6) to provide data about the state of drug dispersion in the nanosystems. Furthermore, DSC helps to determine whether there is a chemical interaction between the drug and the transfersome components (Luiz et al., ). The thermogram of Gen demonstrates a single, well-defined endothermic peak at 312.19 °C with an onset at 283.14 °C, pinpointing the melting point of this crystalline substance ( (i)) (Jangid et al., ; Komeil et al., ). The thermogram of PL90G ( (ii)) presents three broad peaks at 170.3, 220.3 and 280.5 °C, consistent with the thermal behavior of amorphous substances (Salama et al., ). The thermogram of Sp60 demonstrates an endothermic peak at 59.47 °C, while TPGS shows an endothermic peak at 42.41 °C, in agreement with previously reported results (Elsaied et al., ; Lee et al., ).
Compared to the bulk lipid, in transfersomes, either Gen-free or Gen-loaded, the melting peak corresponding to PL90G is shifted toward a reduced temperature of 272 °C. This thermal behavior demonstrates reduced lipid crystallinity, supporting the incorporation of Gen in the lipid bilayer of the formed vesicles, and confirms the interlinkage between transfersome components and hence their cohesion within the lipid bilayer. Furthermore, in the medicated Tfs (F5 and F6), the characteristic peaks of Sp60 and TPGS are shifted to higher temperatures with reduced intensity. These results coincide with data previously reported by other authors (Akl et al., ; Tsakiri et al., ). Moreover, the lack of any peak close to the Gen melting point in Gen-loaded Tfs, which had thermograms similar to those of the unloaded transfersomes, suggests the drug's physical involvement in the transfersome structure, molecular dispersion and solubilization (Jangid et al., ). 3.2.3. XRD study The crystallinity of Gen was confirmed by XRD pattern analysis. Pure Gen displays sharp peaks at 7.45°, 12.68°, 15.94°, 17.99°, 26.29°, 27.35°, 28.64°, 29.36°, 33.54°, 35.91° and 40.56°, which confirms its crystalline structure. Single diffraction peaks were observed at 2θ = 7.55° and 21.56° for PL90G and Sp60, respectively. In addition, TPGS reveals two characteristic crystalline diffraction peaks at 19.28° and 23.53° (Lee et al., ). As is evident from , the specific peaks of pure Gen are not perceived in our prepared transfersomes, which confirms the amorphous state of the drug in the Tf formulation. The XRD results corroborate the encapsulation efficiency and DSC results, indicating that the transfersomes encapsulated a high amount of Gen entrapped in a non-crystalline state. 3.3. Homogeneity, viscosity, pH and % drug content evaluations of optimized Gen Tf hydrogels All the prepared hydrogels possess good homogeneity with no observed clumps or aggregations.
The viscosity of the prepared Tf CS hydrogels of medicated F5 and F6 is 27.90 ± 1.21 and 28.64 ± 1.33 Pa·s, respectively. In addition, the viscosity of the Tf CP940 hydrogels is 31.14 ± 2.03 and 33.23 ± 0.89 Pa·s for medicated F5 and F6, respectively. These data reveal that all the prepared hydrogels demonstrate an optimum viscosity with the suitable texture and consistency required for excellent skin retention (Ramkanth et al., ). The pH of a topical gel should ideally be close to the skin pH so as not to cause any skin irritation. The medicated F5- and F6-based CS hydrogels have pH values of 6.6 ± 0.2 and 6.7 ± 0.3, respectively, while the CP940 hydrogels have pH values of 5.76 ± 0.54 and 5.58 ± 0.91 for medicated F5 and F6, respectively. According to the European Group on Efficacy Measurement and Evaluation of Cosmetics (EEMCO) guidance, the ideal pH for topical skin formulations should be in the range of 4.5−7.0 to be tolerated by the skin without any irritation (Parra et al., ). In addition, the percentage drug content of the prepared Tf hydrogels (F5 and F6) is 95.4 ± 1.4% and 97.9 ± 2.3%, respectively. These data are indicative of homogeneous Gen distribution in the prepared hydrogels, which confirms their suitability for topical administration. 3.4. In vitro drug release and kinetics studies of optimized Gen Tf formulations The release profiles of Gen from its aqueous dispersion (control) and the selected optimized Tf formulations (F5 and F6) were evaluated and the results are presented in . A rapid diffusion rate is observed for the Gen dispersion, with more than 50% of the drug released within the first 3 h and ∼85% after 8 h. In sharp contrast, a much slower release profile of the optimized Tf formulations (F5 and F6) is observed during the whole in vitro release process. Biphasic release patterns are observed for Gen Tfs, with an early burst release of ∼30% of the drug load at 3 h followed by a sustained release over more than 48 h.
The early burst release is mainly attributed to drug adsorbed on the surface and poorly entrapped inside the Tf core, which easily diffuses into the release medium (Manickavasagam et al., ). Another reason for this rapid onset of release is the small particle size of the Tfs, which results in a larger surface-to-volume ratio and a higher release rate for drug located near the Tf surface (Danaei et al., ). However, the subsequent sustained release phase is principally ascribed to the erosion of the transfersomes followed by diffusion of the well-encapsulated Gen (Xiao et al., ). The solid matrix of the Tfs is also responsible for drug immobilization, which supports the slow release rate of Gen. Impressively, the biphasic drug release pattern is expected to enhance the therapeutic proficiency of Gen, because the poorly entrapped drug on the Tf surface is ready to be absorbed and produces an immediate response upon application, while the well-encapsulated Gen portion preserves a continuous drug response over an extended time period (i.e. rapid onset and long duration of action) (Malviya et al., ). As illustrated in , the drug release profiles of both the free Gen and the prepared Tf dispersions were best fitted by the Weibull model, with the highest correlation coefficient values of 0.958, 0.936 and 0.868 for the free Gen, F5 and F6 dispersions, respectively. In the Weibull model, the β value is used to describe the drug release mechanism (Maji et al., ). Fickian diffusion is anticipated if the β value is ≤0.75, while 0.75 < β < 1 is indicative of a combined Fickian diffusion and Case II transport mechanism. Nonetheless, for β > 1, the drug release follows a complex release mechanism. Our data show β values of 0.624, 0.421 and 0.337 for the free Gen, F5 and F6 dispersions, respectively, demonstrating that Gen release follows a Fickian diffusion mechanism (Corsaro et al., ).
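The β cut-offs quoted above map directly to a small classification helper; this is a sketch using the thresholds and the β values reported in the text:

```python
def release_mechanism_from_beta(beta: float) -> str:
    """Interpret the Weibull shape parameter beta per the cut-offs cited above."""
    if beta <= 0.75:
        return "Fickian diffusion"
    if beta < 1.0:
        return "combined Fickian diffusion and Case II transport"
    return "complex release mechanism"

# Beta values reported for the free Gen, F5 and F6 dispersions
for name, beta in [("free Gen", 0.624), ("F5", 0.421), ("F6", 0.337)]:
    print(f"{name}: beta = {beta} -> {release_mechanism_from_beta(beta)}")
```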
Drug release profiles of the selected transferosomal formulations F5 and F6 in either CS or CP940 hydrogels, compared to the corresponding Gen gel, are presented in . For chitosan gels, as is evident from , the Gen-loaded CS gel shows a faster release rate, with 93.10 ± 3.78% drug release after 48 h, compared to the F5 and F6 Tf CS hydrogels, which show 47.6 ± 0.9% and 47.45 ± 1.38% drug release, respectively, after 48 h. Similar results are observed with the CP940-based hydrogels. The Gen-loaded CP940 gel illustrates a rapid Gen release of 96.9 ± 3.66% after 48 h, compared to the F5 and F6 Tf CP940 hydrogels, which present 64.9 ± 3.59% and 61.1 ± 0.66% drug release, respectively, after 48 h . These results demonstrate that the hydrogel-based transferosomal formulations exhibit a highly controlled drug release rate compared to the formulations that lack Tfs. This observation is likely attributable to two main reasons. The first is the solid matrix of the transferosomal gel, which is responsible for drug immobilization and consequently slower drug release than from the Gen hydrogel (Maji et al., ). The second reason for the sustained drug release of the Gen Tf hydrogels is the extra step included in the drug release mechanism: to be released, Gen has to first diffuse out of the vesicular structure of the Tfs and then through the viscous gel matrices to reach the release membrane. This extra step greatly extends its release from the Tf hydrogels. The release kinetics of Gen from the different formulations were evaluated in vitro to determine the efficacy of the transfersomes as a prospective topical drug delivery system. The drug release data obtained from the optimized Tf formulations were evaluated using various release kinetics models, namely zero order, first order, Higuchi, Weibull and Korsmeyer-Peppas, and the results are listed in . In the case of the chitosan hydrogels, relying on the determination coefficient values, the Weibull model is determined to be the best fit for the release data.
The calculated β values are ≤ 0.75 (0.624, 0.421 and 0.337 for the Gen-loaded CS hydrogel and the Gen-loaded F5 and F6 Tf CS hydrogels, respectively), demonstrating a Fickian diffusion-controlled drug release mechanism (Sharma et al., ). Similar results are observed for the CP940 formulations, where the best-fit model is the Weibull model, except for the F6 Tf CP940 hydrogel, which is determined to follow the Korsmeyer-Peppas model with an n value of 0.214, indicating a Fickian (n ≤ 0.5) diffusion-controlled release mechanism (Hady et al., ).

3.5. 3D cell culture ex vivo simulation of skin melanoma for evaluation of the antitumor activity of the optimized Gen Tf formulations
WM164 skin melanoma cells were allowed to grow in 3D spheroids designed to mimic the complex skin tumor microenvironment using medium-stiffness Col-Tgel. Previous studies demonstrated that Col-Tgel is appropriate for in vitro tumor spheroid bioengineering because it resembles the complexity of the tumor matrix. The degree of stiffness of the Col-Tgel depends mainly on the type of tumor studied (Fang et al., ). In the current study, medium-stiffness Col-Tgel was selected to match the stiffness of skin melanoma (101Bio, ). After seeding the Col-Tgel with WM164 cells, the spheroids were allowed to grow for 3 days before any treatment because of the expected prolonged doubling time and slower metabolic rate within the tumor spheroid microenvironment (Fang et al., ). After the formation of well-developed WM164 tumor spheroids, the antitumor activity of Gen was evaluated using two different techniques (cell viability assay and live/dead cell assay). The cell viability assay is used to evaluate cell viability on the tumor surface, because the reagent color change is only measured in the medium surrounding the tumor spheroids. The live/dead cell assay, in contrast, is used mainly to evaluate the antitumor activity inside the tumor mass and to determine the ability of the formulation to penetrate into deeper areas of the solid tumor.
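It may help to note how such surface viability percentages are computed: the colorimetric readout is normalized between a cell-free blank and the untreated control. The sketch below uses the standard normalization for colorimetric viability assays with hypothetical absorbance readings (none of these numbers are the study's raw data):

```python
# Percentage cell viability from CCK-8-type absorbance readings.
# viability % = (A_sample - A_blank) / (A_control - A_blank) * 100

def percent_viability(a_sample, a_control, a_blank):
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

a_blank = 0.08     # medium + reagent, no cells (hypothetical)
a_control = 1.52   # untreated spheroids, defined as 100% viability (hypothetical)
treated = {        # hypothetical treated-well absorbances
    "blank Tfs": 1.53,
    "Gen Tfs": 0.27,
    "Gen solution": 0.16,
}

for name, a in treated.items():
    print(f"{name}: {percent_viability(a, a_control, a_blank):.1f}% viable")
```

Group means computed this way are then compared pairwise against the untreated control to obtain the p-values reported in the next subsection.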
3.5.1. Cell viability
The percentage cell viability quantification results of the tested formulations, obtained using the CCK-8 assay kit, are described in . The results show that the blank F6 Tfs have a percentage cell viability (100.79 ± 4.90%) comparable to that of the untreated tumor spheroids (medium only, 100% cell viability), with no statistically significant difference (p = .9997). These results confirm the safety and biocompatibility of our transfersome ingredients. On the other hand, comparing the percentage cell viability of Gen F6 Tfs (13.33 ± 1.044%), Dacarbazine solution (5.79 ± 0.025%) and Gen solution (5.65 ± 0.115%) with that of the untreated tumor spheroids (100 ± 4.71%), our results show highly significant differences (p < .0001), which demonstrates the antitumor activities of these formulations. Our results also illustrate that there is no statistically significant difference in the percentage cell viability between the Dacarbazine and Gen solutions (p = .9999). These data demonstrate that Gen has an antitumor activity comparable to that of the positive control. In contrast, a significantly higher percentage cell viability is detected when comparing Gen Tfs with Dacarbazine solution (p = .0018) and Gen solution (p = .0011). This lower efficacy of Gen Tfs compared to the Dacarbazine and Gen solutions is expected to be mainly due to two factors. The first is the direct effect of the drug available in solution on the cells located superficially on the surface of the tumor spheroids, which is greater for the drug solutions than for the transfersomes. The second factor is the encapsulation of Gen inside the lipid structure of the transfersomes, which requires a longer time for the drug to be released and exert its antitumor effect.
Given these two factors, and because the cell viability study is designed to evaluate the number of viable cells on the surface of the tumor spheroids, the drug solutions are expected to show higher antitumor activity than the transfersomes because of the availability of a high amount of free drug molecules in the solutions.

3.5.2. Live/dead cell assay
The confocal microscope images of the tumor spheroids treated with the different formulations are illustrated in , with the dead cell column (stained red) and the total cell column (stained green). The quantitative evaluation of the percentage cell death (antitumor effect) of the different formulations is demonstrated in . The results show that the blank F6 Tfs have a percentage cell death count comparable to that of the untreated tumor spheroids, with no statistically significant difference (p = .849). These results confirm the safety and biocompatibility of our transfersome ingredients and agree with the percentage cell viability results. On the other hand, comparing the percentage cell death of Gen F6 Tfs (117.6 ± 4.78%), Dacarbazine solution (100 ± 2.46%) and Gen solution (84.1 ± 12.12%) with that of the untreated tumor spheroids shows highly significant differences (p < .001), which demonstrates the antitumor activities of these formulations. The results also show no statistically significant difference in the cell death percentage between the Dacarbazine and Gen solutions, which confirms that Gen has an antitumor activity comparable to that of the positive control (p = .589). In contrast to the cell viability results, the cell death results present a significantly higher antitumor effect of Gen Tfs compared to both the Dacarbazine and Gen solutions (p < .05). This higher efficacy of Gen Tfs compared to the drug solutions is likely due to the synergistic effect of the Tf components and the drug.
Among these components is the edge activator (TPGS), which provides the Tfs with their hydrophilic, ultra-deformable and flexible structure and hence a high ability to penetrate into deeper areas of the tumor spheroid core. Other Tf components, such as the phospholipid (Hmingthansanga et al., ) and Span 60 (Mady et al., ), are known permeability enhancers, which could also contribute a synergistic effect. The higher antitumor efficacy of Gen Tfs may also be attributed to the small particle size of the Tfs, which enables greater penetration through the tumor and longer retention inside it, resulting in better efficacy of the chemotherapeutic agent (Maeda, ; Caracciolo, ). Another reason for the higher efficacy of Gen Tfs is their negative zeta potential value, which enables better penetration into and entrapment within the tumor tissues (Miatmoko et al., ). Several previous studies demonstrated the penetration-enhancing ability of transfersomes through skin melanoma tissue owing to their flexible and ultra-deformable structure, which enables them to compress themselves to less than one-tenth of their own size and easily pass through the tissues (Jiang et al., ; Demartis et al., ). The TPGS incorporated in our transfersomes acts as an edge activator that provides the Tfs with this flexible and ultra-deformable behavior, allowing them to easily penetrate into deeper areas of the tumor spheroids. In conclusion, our data demonstrate that Gen Tfs are a promising drug delivery system for the treatment of skin melanoma (Maji et al., ).

3.6. Stability studies of the optimized Gen Tf formulations
Regarding the physical stability of the Tf dispersions after storage for 12 months at different temperatures, there were no observable signs of physical changes, such as precipitation or aggregation, in either the medicated or the blank Tf dispersions.
The improved physical stability of the transfersome drug delivery system may be due to the increased elasticity and surface hydrophilicity resulting from the presence of the edge activator. This enhanced elasticity and surface hydrophilicity prevent agglomeration and coalescence due to osmotic stress. This unique characteristic of transfersomes makes them more advantageous than traditional liposomes (Cevc, ; Matharoo et al., ). The particle size data of the medicated and blank Tfs are presented in and , respectively. The results demonstrate that our Tf formulations possess excellent particle size stability, with no significant changes in the particle size of the medicated Tfs after 1 year of storage at different temperatures (p > .05). The average particle size of the fresh Gen Tfs was 168.867 ± 9.774 nm and, after 12 months, 164.278 ± 4.110 nm and 199.811 ± 14.539 nm for the medicated formulations at 5 °C and 25 °C, respectively . Regarding the blank Tfs, the average particle size of the fresh sample was 160.392 ± 2.710 nm, with no significant changes after 1 year of storage at 5 °C (p > .05). In addition, at 25 °C there were no significant changes in the particle size of the blank Tfs during the first 6 months (p > .05). However, after 6 months of storage there was a significant increase in the blank Tf particle size (p = .0090), which is likely due to particle aggregation as a result of prolonged standing at the higher temperature (Omar et al., ), as illustrated in . Despite the significant increase in the particle size of the blank Tfs after 6 months of storage at 25 °C, the particle size is still in the acceptable range for efficient delivery of the encapsulated drug into the deeper layers of the skin (Verma et al., ). It is also reported that a particle size below 600 nm is essential to deliver the encapsulated drug into the different layers of a skin tumor (Danaei et al., ).
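The significance statements above rest on comparisons of replicate size measurements between time points; a minimal sketch of one such comparison (hypothetical triplicate z-average diameters, two-sample t-test via SciPy, not the study's raw data) is:

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate z-average diameters (nm): fresh vs. 12 months at 25 C.
fresh = np.array([168.9, 159.4, 178.3])
stored_25c = np.array([199.8, 185.5, 214.2])

t_stat, p_value = stats.ttest_ind(fresh, stored_25c)
print(f"p = {p_value:.4f}")
# p < .05 -> the size change on storage is statistically significant.
print("significant" if p_value < 0.05 else "not significant")
```

With more than two time points, a one-way ANOVA with a post hoc test would be the usual choice instead of repeated pairwise t-tests.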
Regarding the PDI values of the Tfs demonstrated in and , although there were significant changes, especially for the medicated Tfs at some time points, all the measured PDI values remain in the acceptable range for phospholipid-based vesicles (< 0.4) (Putri et al., ). PDI is an indicator of the quality of the particle size distribution, and it is clearly observed from our results ( and ) that our Tfs have a uniform and narrow particle size distribution, which is indicative of good physical stability and reliable performance at the tumor application site (Danaei et al., ). Concerning the zeta potential values of the prepared Gen Tfs illustrated in and , the fresh Gen Tf and blank Tf samples possess high negative zeta potential values of −48.408 ± 1.253 mV and −58.356 ± 1.363 mV, respectively. After 12 months of storage at 5 °C, there were no significant changes in the measured zeta potential values of either the medicated (−47.953 ± 0.903 mV) or the blank Tfs (−55.547 ± 0.750 mV) (p > .05). Similarly, at 25 °C, there were no significant changes in the measured zeta potential values for the first 3 months for the medicated Tfs and for the first 6 months for the blank Tfs (p > .05). However, there was an observed increase in the negativity of the measured zeta potential values at 25 °C after 6 months for the medicated Tfs (p = .0213) and after 9 months (p = .0081) for the blank Tfs, as demonstrated in and , respectively. This observed increase in the negativity of the Tfs stored at 25 °C is likely due to the hydrolysis of the lipid contents of the prepared Tfs, with exposure of the negatively charged free carboxylic acid groups at the higher temperature (Tamilvanan et al., ). Advantageously, this increase in zeta potential negativity will provide the Tf formulations with more physical stability.
This increase in the zeta potential values will also help to maintain a smaller particle size and a narrow particle size distribution (PDI) owing to the increased and continuous repulsion between adjacent particles in the dispersion (Izak-Nau et al., ). Regarding the physical stability of the Tf CP940 hydrogels after 12 months of storage at different temperatures, there were no observed changes in the physical appearance or consistency at 5 °C for either the medicated or the blank Tf hydrogels. Similarly, there were no observed changes in the physical appearance or consistency at 25 °C for either hydrogel during the first nine months of the stability study. After nine months, there was an observed decrease in hydrogel consistency for both the medicated and blank Tf hydrogels. This change in consistency is likely due to the hydrolysis of the lipid contents of the prepared Tfs, with the liberation of free fatty acids at the higher temperature. These free fatty acids caused the pH of the CP940 hydrogels to fall below the polymer's pKa of about 5.5, converting it from the gel to the sol state, since CP940 is a pH-triggered in situ gelling polymer (Gupta and Vyas, ). This change in the pH values of the hydrogels during long-term storage at 25 °C could be mitigated by preparing the hydrogels in a suitable biocompatible buffering system. The changes in the pH values of the prepared Tf hydrogels during storage for 12 months at different temperatures are illustrated in and . Initially, the pH values were 5.622 ± 0.036 and 5.608 ± 0.021 for the medicated and blank Tf hydrogels, respectively. No significant changes were observed in the pH values of either the medicated (5.578 ± 0.083) or the blank (5.702 ± 0.031) formulation after storage for 12 months at 5 °C (p > .05). Similarly, at 25 °C, there were no significant changes in the pH values for up to 3 months of storage for the medicated hydrogel and for up to 6 months for the blank hydrogel (p > .05).
In contrast, at the end of the study, there was an observed decrease in the pH values of the formulations stored at 25 °C, to 5.057 ± 0.077 (p = .006) and 4.796 ± 0.039 (p = .0002) for the medicated and blank hydrogels, respectively ( and ). This observed decrease in pH at the elevated temperature is likely due to the release of free fatty acids as a result of hydrolysis and/or oxidative degradation of the transfersome phospholipid components at the higher temperature (Tamilvanan et al., ). Although there was a decrease in the measured pH values of the formulations stored at 25 °C for 1 year, according to the EEMCO guidelines the formulations still have ideal pH values for a semi-solid formulation intended for topical use (pH 4.5–7) (Novi et al., ; Parra et al., ; Lukić et al., ). The chemical stability of the medicated CP940 Tf hydrogel, expressed as drug content, is illustrated in . As is evident from our results, there were no significant changes in the formulations' drug content at any time point during the 1 year of storage, regardless of the storage temperature (p > .05). This improved chemical stability of genistein may be due to its incorporation into this lipid-based drug delivery system, which provides additional protection of the incorporated active ingredient against possible degradation by oxidation, light and temperature (Matharoo et al., ). These data clearly indicate the very high chemical stability of genistein in the tested formulations at temperatures up to 25 °C, which supports its use as a promising topical drug delivery system for the treatment of skin melanoma that could be stored at ambient temperature without any specific storage conditions. As a precautionary measure, however, we recommend storing the Tf hydrogels under refrigeration to preserve the integrity of their lipid components and mitigate pH changes at ambient conditions.
3.1. Entrapment efficiency, particle size, PDI and zeta potential of Gen Tfs
To optimize our transferosomal formulations, the effects of the PL90G:EA (TPGS) molar ratio and the Gen concentration were investigated. Preliminary studies were performed to attain the optimal formulation by testing different molar ratios of the drug, Sp60 and PL90G, and different organic solvent ratios. The effects of these factors on the PS, PDI, ZP and % EE were evaluated. The criteria for selecting our optimal Tf formulation were the smallest PS and PDI values and the highest ZP and % EE values. represents the PS and ZP values of our optimized F6 Tf formulation. As revealed in , the particle sizes of all the prepared transfersomes are in the range of 150.40 − 266.90 nm with small PDI values (0.19–0.33), which is indicative of a narrow particle size distribution pattern. PDI values (˂0.5) depict good distribution and are referred to as unimodal NP dispersion, which is imperative for warranting the stability of a colloidal dosage form upon storage (Danaei et al., ). Here, the average ZP values of all the prepared Tf formulations are in the range of −24.93 ± 0.15 to −48.41 ± 1.25 mV. It is worth noting that ZP values above 30 mV (either negative or positive) are requisite for very good stability in the dispersion medium and colloidal systems, because the nanoparticles then have enough repulsion and are less prone to forming aggregates (Kamble et al., ; Maji et al., ). In addition to stability, the high negative charge carried by our Tfs offers more biocompatibility and lower biological toxicity, as it has previously been reported that negatively charged particles are less toxic than positive particles owing to their lower ability to induce the production of reactive oxygen species (Lockman et al., ; Ortiz et al., ). The negative ZP of our Tfs is attributed to the use of negatively charged lipid molecules such as PL90G, which carries negatively charged phosphate groups (Allam et al., ).
In addition to the colloidal system stability conferred by electric repulsion between particles, this stability is improved by the steric stabilization accorded by the steric shielding influence of the PEG moiety of TPGS (Luiz et al., ; Yang et al., ). As presented in , the % EE of the prepared Tf formulations varies from 85.73 ± 0.26% to 96.65 ± 0.27%. The drug entrapment efficiency increases as the Gen amount increases from 6 to 10 mg, which indicates that maximum drug encapsulation is achieved at a Gen amount of 10 mg, beyond which excess drug precipitates, leading to a reduced % EE (data not shown). Highly lipophilic drugs have the propensity to be completely packed within the well-packed, tight bilayer structure formed by Spans owing to their lipophilic nature and the low hydrophilic-lipophilic balance (HLB) value of Spans (Ramkanth et al., ). Our results demonstrate that the particle size of F5 and F6 is below 170 nm, with low PDI and high ZP values compared to the other formulations, manifesting the feasibility of these systems for considerable localized Gen delivery through the skin . Furthermore, TEM analyses of the blank and medicated Tfs (F6) reveal the spherical outlines of the Tfs, with a uniform size distribution and a lack of aggregation, confirming the particle size results obtained from the zeta sizer.

3.2. Physicochemical characterization of Gen Tfs
Physicochemical evaluations including FTIR, DSC and XRD were performed on the lyophilized optimized blank and medicated transfersome formulations (F5 and F6). These evaluations assess the possible interaction of Gen with the other transfersome excipients and evaluate the crystallinity status of the entrapped Gen inside the Tfs to confirm their suitability for topical application.

3.2.1. FTIR study
FTIR spectroscopy is a widely used nondestructive technique for studying the possible interactions of distinctive functional groups between drugs and excipients by perceiving changes in the characteristic wavenumbers of each molecule (Malviya et al., ). As elucidated in (i), Gen shows its principal infrared peaks at 3412 cm −1 (–O–H stretching), 3105 cm −1 (–C–H stretching), 1653 cm −1 (–C=O stretching), 1612–1424 cm −1 (C–C stretching), 1308–1253 cm −1 (–O–H bending), 1198–1043 cm −1 (C–O–C stretching) and 884–817 cm −1 (–C–H bending). (ii) illustrates the spectrum of PL90G, which reveals characteristic stretching bands (C–H) at 2926 and 2855 cm −1 , a stretching band of the ester carbonyl group (C=O) at 1739 cm −1 and an ester stretching band (C–O) at 1241 cm −1 (Salama et al., ). As displayed in (iii), Sp60 presents a broad band at 3420 cm −1 , a sharp band at 2919 cm −1 , and a characteristic peak at 1739 cm −1 (Sabry et al., ). As shown in (iv), the terminal –O–H group, –C–H stretching, and the characteristic C=O and C–O stretching bands of TPGS appear at 3448, 2891, 1742 and 1113 cm −1 , respectively (Sharma and Chauhan, ). As is evident from , the characteristic peaks of Gen are slightly shifted after drug encapsulation in the Gen Tfs. Also, the FTIR spectra of both the medicated and blank transfersomes retain the characteristic peaks of all the excipients without any major shifts, which supports the stability of our prepared transfersomes and the absence of any chemical interaction between their components (Hady et al., ).

3.2.2. DSC study
DSC thermograms were recorded for Gen, PL90G, Sp60, TPGS, and the blank and medicated selected Tfs (F5 and F6) to provide data about the state of drug dispersion in the nanosystems. Furthermore, DSC helps to determine whether there is a chemical interaction between the drug and the transfersome components (Luiz et al., ).
The thermogram of Gen demonstrates a single, well-defined endothermic peak at 312.19 °C with an onset at 283.14 °C, pinpointing the melting point of this crystalline substance ( (i)) (Jangid et al., ; Komeil et al., ). The thermogram of PL90G ( (ii)) presents three broad peaks at 170.3, 220.3 and 280.5 °C, consistent with the thermal behavior of amorphous substances (Salama et al., ). The thermogram of Sp60 demonstrates an endothermic peak at 59.47 °C, while TPGS shows an endothermic peak at 42.41 °C , in agreement with previously reported results (Elsaied et al., ; Lee et al., ). Compared to the bulk lipid, in the transfersomes, either Gen-free or Gen-loaded, the melting peak corresponding to PL90G is shifted toward a reduced temperature of 272 °C . This thermal behavior demonstrates reduced lipid crystallinity, supporting the incorporation of Gen in the lipid bilayer of the formed vesicles, and ascertains the interlinkage between the transfersome components and hence their cohesion within the lipid bilayer. Furthermore, in the medicated Tfs—F5 and F6—the characteristic peaks of Sp60 and TPGS are shifted to higher temperatures with reduced intensity . These results coincide with previously reported data from other authors (Akl et al., ; Tsakiri et al., ). Moreover, the lack of any peak close to the Gen melting point in the Gen-loaded Tfs, whose thermograms are similar to those of the unloaded transfersomes, suggests the physical involvement of Gen in the transfersome structure, along with its molecular dispersion and solubilization (Jangid et al., ).

3.2.3. XRD study
The crystallinity of Gen was confirmed by XRD pattern analysis. The pure Gen displays sharp peaks at 7.45°, 12.68°, 15.94°, 17.99°, 26.29°, 27.35°, 28.64°, 29.36°, 33.54°, 35.91° and 40.56°, which confirms its crystalline structure. A single diffraction peak is observed at 2θ = 7.55° and 21.56° for PL90G and Sp60, respectively . In addition, TPGS reveals two characteristic crystalline diffraction peaks at 19.28° and 23.53° (Lee et al., ).
As is evident from , the pure Gen-specific peaks are not perceived in our prepared transfersomes, which confirms the amorphous state of the drug in the Tf formulation. The XRD results confirm the encapsulation efficiency and DSC results, indicating that the transfersomes encapsulated a high amount of Gen, entrapped in a non-crystalline state inside the transfersomes.
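For reference, entrapment efficiency of the kind reported for these formulations is commonly estimated indirectly, from the unentrapped drug assayed in the supernatant after separating the vesicles; whether this exact method was used here is an assumption, and the numbers below are hypothetical:

```python
# Indirect entrapment-efficiency estimate (a common approach; that this exact
# method was used in the study is an assumption). Amounts are hypothetical.
# %EE = (total drug - free drug in supernatant) / total drug * 100

def entrapment_efficiency(total_mg, free_mg):
    return (total_mg - free_mg) / total_mg * 100.0

total_gen = 10.0   # total Gen added to the formulation (mg)
free_gen = 0.52    # unentrapped Gen assayed in the supernatant (mg)

ee = entrapment_efficiency(total_gen, free_gen)
print(f"%EE = {ee:.2f}%")  # prints %EE = 94.80%
```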
DSC study DSC thermograms were recorded for Gen, PL90G, Sp60, TPGS, blank and medicated selected Tfs (F5 and F6) to provide data about the state of drug dispersion in the nanosystems. Furthermore, DSC helps to determine if there is a chemical interaction between the drug and transfersome components (Luiz et al., ). The thermogram of Gen demonstrates a single, well defined endothermic peak at 312.19 °C with an onset at 283.14 °C, pinpointing the melting point of this crystalline substance ( (i)) (Jangid et al., ; Komeil et al., ). The thermogram of PL90G ( (ii)) presents three broad peaks at 170.3, 220.3 and 280.5 °C, consistent with the thermal behavior of amorphous substances (Salama et al., ). The thermogram of Sp60 demonstrates an endothermic peak at 59.47 °C while TPGS shows an endothermic peak at 42.41 °C , respectively which agrees previously reported results (Elsaied et al., ; Lee et al., ). Compared to the bulk lipid, in transfersomes, either Gen-free or Gen-loaded, the melting peak corresponding to PL90G is shifted toward reduced temperatures at 272 °C . This thermal behavior demonstrates the reduced lipid crystallinity boosting the incorporation of Gen in the lipid bilayer of the formed vesicles and ascertain the interlinkage between transfersome components and hence their cohesion within the lipid bilayer. Furthermore, in medicated Tfs—F5 and F6—the characteristic peak of SP60 and TPGS are shifted to higher temperature with reduced intensity . These results coincide with previously reported data of other authors (Akl et al., ; Tsakiri et al., ). Moreover, the lack of any peak close to Gen melting point in Gen loaded Tfs that had a similar spectrum with the unloaded transfersomes suggests its physical involvement in transfersomes structure, molecular dispersion and solubilization (Jangid et al., ). XRD study The crystallinity of Gen was confirmed by the XRD pattern analysis. 
The pure Gen displays sharp peaks at 7.45°, 12.68°, 15.94°, 17.99°, 26.29°, 27.35°, 28.64°, 29.36°, 33.54°, 35.91° and 40.56° which confirms its crystalline structure. A single diffraction peak was observed at 2θ = 7.55° and 21.56° for PL90G and Sp60, respectively . In addition, TPGS reveals two characteristic crystalline diffraction peaks at 19.28° and 23.53° (Lee et al., ). As is evident from , the pure Gen specific peaks are not perceived in our prepared transfersomes which confirms the amorphousness of the drug in the Tf formulation. The XRD results confirm the obtained results of encapsulation efficiency and DSC, indicating that transfersomes encapsulated a high amount of Gen that entrapped in a non-crystalline state inside the transfersomes. Homogeneity, viscosity, pH and % drug content evaluations of optimized gen Tf hydrogels All the prepared hydrogels possess good homogeneity with no observed clumps or aggregations. The viscosity of our prepared Tf CS hydrogels of medicated F5 and F6 is 27.90 ± 1.21 and 28.64 ± 1.33 Pa.s, respectively. In addition, the viscosity of our Tf CP940 hydrogels is 31.14 ± 2.03 and 33.23 ± 0.89 Pa.s for medicated F5 and F6, respectively. These data reveal that all our prepared hydrogels demonstrate an optimum viscosity with suitable texture and consistency required for an excellent skin retention (Ramkanth et al., ). The pH of the topical gel should ideally be close to the skin pH to not cause any skin irritation. The medicated F5 and F6-based CS hydrogels have pH values of 6.6 ± 0.2 and 6.7 ± 0.3, respectively, while, that of CP940 hydrogels have pH values of 5.76 ± 0.54 and 5.58 ± 0.91 for medicated F5 and F6 hydrogels, respectively. According to the European Group of efficacy measurement and evaluation of cosmetics (EEMCO) guidance, the ideal pH for topical skin formulations should be in the range of 4.5 − 7.0 to be tolerable by the skin without any irritation (Parra et al., ). 
In addition, the percentage drug content of our prepared Tf hydrogels—F5 and F6—is 95.4 ± 1.4% and 97.9 ± 2.3%, respectively. These data are indicative of the homogeneous Gen distribution in the prepared hydrogels which confirms its suitability for topical administration. In vitro drug release and kinetics studies of optimized Gen Tf formulations The release profiles of Gen from its aqueous dispersion (control) and the selected optimized Tfs formulations (F5 and F6) were evaluated and the results are presented in . A rapid diffusion rate is observed for Gen dispersion, with more than 50% drug released within the first 3 h and ∼−85% after 8 h. In sharp contrast, a much slower release profile of our optimized Tf formulations—F5 and F6—is observed during the whole in vitro release process. Biphasic release patterns are observed for Gen Tfs with an early burst release of ∼30% of the drug load at 3 h followed by a sustained release for more than 48 h. The early burst release is mainly assigned to the drug adsorbed on the surface and poorly entrapped inside the Tfs core, which is easily diffused into the release medium (Manickavasagam et al., ). Another reason for this rapid onset release is the small particle size of the Tfs that resulted in larger surface to volume ratio and higher release rate for drug located near to Tfs surface (Danaei et al., ). However, the following sustained release phase is principally ascribed to the erosion of the transfersomes and then the diffusion of the well-encapsulated Gen (Xiao et al., ). The solid matrix of the Tfs is also responsible for drug immobilization which supports the slow release rate of Gen. Impressively, the biphasic drug release pattern is expected to enhance the therapeutic proficiency of Gen. Because the poorly entrapped drug on the Tf surface will be ready to be absorbed and then will produce immediate response upon application. 
However, the well-encapsulated Gen portion will preserve a continuous drug response for an extended time period (i.e. rapid onset and long duration of action) (Malviya et al., ). As illustrated in , the drug release profiles of both the free Gen and our prepared Tf dispersions were best fitted with the Weibull model, which has the highest correlation coefficient values of 0.958, 0.936 and 0.868 for the free Gen, F5 and F6 dispersions, respectively. In the Weibull model, the β value is used to describe the drug release mechanism (Maji et al., ). Fickian diffusion is anticipated if the β value is ≤ 0.75, while 0.75 < β < 1 is indicative of a combined Fickian diffusion and Case II transport mechanism. Nonetheless, for β > 1, the drug release follows a complex release mechanism. Our data show β values of 0.624, 0.421 and 0.337 for the free Gen, F5 and F6 dispersions, respectively, which demonstrates that Gen release follows a Fickian diffusion mechanism (Corsaro et al., ). Drug release profiles of the selected transfersomal formulations F5 and F6 in either CS or CP940 hydrogels compared to the corresponding Gen gel are presented in . For chitosan gels, as is evident, the Gen-loaded CS gel shows a faster release rate, with 93.10 ± 3.78% drug release after 48 h, compared to the F5 and F6 Tf CS hydrogels, which show 47.6 ± 0.9% and 47.45 ± 1.38% drug release, respectively, after 48 h. Similar results are observed with the CP940-based hydrogels. The Gen-loaded CP940 gel shows rapid Gen release of 96.9 ± 3.66% after 48 h compared to the F5 and F6 Tf CP940 hydrogels, which present 64.9 ± 3.59% and 61.1 ± 0.66% drug release, respectively, after 48 h . These results demonstrate that the hydrogel-based transfersomal formulations exhibit a highly controlled drug release rate compared to the formulations that lack Tfs.
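As an illustrative aside, the Weibull fitting and β-based mechanism classification described above can be sketched in Python; the release profile below is synthetic (generated from a known β), not the study's data:

```python
import numpy as np

def fit_weibull(t, F):
    """Linearized Weibull fit of cumulative release F (%) versus time t:
    ln(-ln(1 - F/100)) = beta*ln(t) - beta*ln(tau)."""
    t, F = np.asarray(t, float), np.asarray(F, float)
    y = np.log(-np.log(1.0 - F / 100.0))
    beta, intercept = np.polyfit(np.log(t), y, 1)
    tau = np.exp(-intercept / beta)  # time-scale parameter
    return beta, tau

def release_mechanism(beta):
    """Classify the release mechanism using the thresholds quoted in the text."""
    if beta <= 0.75:
        return "Fickian diffusion"
    if beta < 1.0:
        return "combined Fickian diffusion and Case II transport"
    return "complex release mechanism"

# Synthetic profile generated from beta = 0.42, tau = 30 h to check the fit
t = np.array([1.0, 3.0, 6.0, 12.0, 24.0, 48.0])
F = 100.0 * (1.0 - np.exp(-(t / 30.0) ** 0.42))
beta, tau = fit_weibull(t, F)  # recovers beta ≈ 0.42, tau ≈ 30
```

For the β values reported above (all ≤ 0.75), `release_mechanism` returns "Fickian diffusion", matching the interpretation in the text.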
This observation is likely attributed to two main reasons: the first is the solid matrix of the transfersomal gel, which is responsible for drug immobilization and consequently a slower drug release than from the Gen hydrogel (Maji et al., ). The second reason for the sustained drug release from Gen Tf hydrogels is the extra step involved in the drug release pathway. To be released, Gen must first diffuse out of the vesicular structure of the Tfs and then through the viscous gel matrix to reach the release membrane. This extra step greatly extends its release from the Tf hydrogels. The release kinetics of Gen from the different formulations were evaluated in vitro to determine the efficacy of the transfersomes as a prospective topical drug delivery system. The drug release data obtained from the optimized Tf formulations were evaluated using various release kinetics models (zero order; first order; Higuchi; Weibull; and Korsmeyer-Peppas) and the results are listed in . In the case of the chitosan hydrogels, based on the determination coefficient values, the Weibull model is the best fit for the release data. The calculated β values are ≤ 0.75 (0.624, 0.421 and 0.337 for the Gen-loaded CS hydrogel and the Gen-loaded F5 and F6 Tf CS hydrogels, respectively), demonstrating a Fickian diffusion-controlled drug release mechanism (Sharma et al., ). Similar results are observed for the CP940 formulations, where the best-fit model is the Weibull model, except for the F6 Tf CP940 hydrogel, which was determined to follow the Korsmeyer-Peppas model with an n value of 0.214, indicating a Fickian (n ≤ 0.5, non-steady) diffusion-controlled release mechanism (Hady et al., ). 3D cell culture ex vivo simulation of skin melanoma for evaluation of the antitumor activity of the optimized Gen Tf formulations WM164 skin melanoma cells were allowed to grow in 3D spheroids designed to mimic the complex skin tumor microenvironment using medium-stiffness Col-Tgel.
Previous studies demonstrated that Col-Tgel is appropriate for in vitro bioengineering of tumor spheroids because it resembles the complexity of the tumor matrix. The degree of stiffness of the Col-Tgel depends mainly on the type of tumor being studied (Fang et al., ). In the current study, medium-stiffness Col-Tgel was selected to match the stiffness of skin melanoma (101Bio, ). After seeding the Col-Tgel with WM164 cells, the spheroids were allowed to grow for 3 days before any treatment because of the expected prolonged doubling time and the slower metabolic rate within the tumor spheroid microenvironment (Fang et al., ). After the formation of well-developed WM164 tumor spheroids, the antitumor activity of Gen was evaluated using two different techniques (cell viability assay and live/dead cell assay). The cell viability assay is used to evaluate cell viability on the tumor surface because the reagent color change is only measured in the medium surrounding the tumor spheroids. The live/dead cell assay, in contrast, is used mainly to evaluate the antitumor activity inside the tumor mass and to determine the ability of the formulation to penetrate into deeper areas of the solid tumor. 3.5.1. Cell viability describes the percentage cell viability quantification results of the tested formulations using the CCK-8 assay kit. The results show that the blank F6 Tfs have a comparable percentage cell viability (100.79 ± 4.90%) to that of the untreated tumor spheroids (medium only, 100% cell viability) with no statistically significant difference ( p = .9997). These results confirm the safety and biocompatibility of our transfersome ingredients. On the other hand, comparing the percentage cell viability of Gen F6 Tfs (13.33 ± 1.044%), Dacarbazine solution (5.79 ± 0.025%) and Gen solution (5.65 ± 0.115%) with the untreated tumor spheroids (100 ± 4.71%), our results show highly significant differences ( p < .0001), demonstrating the antitumor activities of these formulations.
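For reference, the viability percentages quoted in this section follow the usual normalization of a colorimetric readout against the untreated control; a minimal sketch with hypothetical absorbance values (not the study's raw CCK-8 readings):

```python
def percent_viability(a_sample, a_control, a_blank=0.0):
    """Viability relative to the untreated control; a_blank is the
    absorbance of medium without cells (all values here hypothetical)."""
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

# The untreated control defines 100 % viability by construction
control_viability = percent_viability(0.82, 0.82, a_blank=0.05)  # 100.0
treated_viability = percent_viability(0.15, 0.82, a_blank=0.05)  # ≈ 13 %
```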
Our results also illustrate that there is no statistically significant difference in the percentage cell viability between the Dacarbazine and Gen solutions ( p = .9999). These data demonstrate that Gen has an antitumor activity comparable to that of the positive control. In contrast, a significantly higher percentage cell viability was detected for Gen Tfs compared with the Dacarbazine solution ( p = .0018) and the Gen solution ( p = .0011). The lower efficacy of Gen Tfs compared to the Dacarbazine and Gen solutions is expected to be mainly due to two factors: the first is the direct effect of the drug available in solution on the cells located superficially on the surface of the tumor spheroids, which is greater for the drug solutions than for the transfersomes. The second factor is the encapsulation of Gen inside the lipid structure of the transfersomes, which requires a longer time to be released and exert its antitumor effect. Based on these two factors, and because the cell viability study is designed to evaluate the number of viable cells on the surface of the tumor spheroids, the drug solutions are expected to show higher antitumor activity than the transfersomes owing to the availability of a high amount of free drug molecules in the solutions. 3.5.2. Live/dead cell assay illustrates the confocal microscope images for the tumor spheroids treated with different formulations, with the dead cell column (stained red) and the total cell column (stained green). demonstrates the quantitative evaluation of the percentage cell death (antitumor effect) of the different formulations. The results show that the blank F6 Tfs have a comparable percentage cell death count to that of the untreated tumor spheroids with no statistically significant difference ( p = .849). These results confirm the safety and biocompatibility of our transfersome ingredients and agree with the percentage cell viability results.
On the other hand, comparing the percentage cell death of Gen F6 Tfs (117.6 ± 4.78%), Dacarbazine solution (100 ± 2.46%) and Gen solution (84.1 ± 12.12%) with the untreated tumor spheroids shows highly significant differences ( p < .001), demonstrating the antitumor activities of these formulations. Also, the results show that there is no statistically significant difference in the cell death percentage between the Dacarbazine and Gen solutions, which confirms that Gen has an antitumor activity comparable to that of the positive control ( p = .589). In contrast to the cell viability test results, the cell death results present a significantly higher antitumor effect of Gen Tfs compared to both the Dacarbazine and Gen solutions ( p < .05). This higher efficacy of Gen Tfs compared to the drug solutions is likely due to the synergistic effect of the Tf components and the drug. Among these components is the edge activator (TPGS), which gives the Tfs their hydrophilic, ultra-deformable and flexible structure and a high ability to penetrate into deeper areas of the tumor spheroid core. Other Tf components, such as the phospholipid (Hmingthansanga et al., ) and Span 60 (Mady et al., ), are known permeability enhancers, which could also provide a synergistic effect. The higher antitumor efficacy of Gen Tfs may also be attributed to the small particle size of the Tfs, which enables deeper penetration into the tumor and longer retention inside it, resulting in better efficacy of the chemotherapeutic agent (Maeda, ; Caracciolo, ). Another reason for the higher efficacy of Gen Tfs is their negative zeta potential value, which enables better penetration and entrapment into the tumor tissues (Miatmoko et al., ).
Several previous studies have demonstrated the penetration-enhancing ability of transfersomes through skin melanoma tissue; their flexible and ultra-deformable structure enables them to compress themselves to less than one-tenth of their own size and easily pass through the tissues (Jiang et al., ; Demartis et al., ). The TPGS incorporated in our transfersomes acts as an edge activator, providing the Tfs with this flexible and ultra-deformable behavior that allows them to penetrate into deeper areas of the tumor spheroids. In conclusion, our data demonstrate that Gen Tfs are a promising drug delivery system for the treatment of skin melanoma (Maji et al., ).
Stability studies of the optimized Gen Tf formulations Regarding the physical stability of the Tf dispersions after storage for 12 months at different temperatures, there were no observable signs of physical change, such as precipitation or aggregation, in either the medicated or blank Tf dispersions. The improved physical stability of the transfersome drug delivery system may be due to the increased elasticity and surface hydrophilicity resulting from the presence of the edge activator. This enhanced elasticity and surface hydrophilicity prevent agglomeration and coalescence due to osmotic stress. This unique characteristic of transfersomes makes them more advantageous than traditional liposomes (Cevc, ; Matharoo et al., ). The particle size data of both medicated and blank Tfs are presented in and , respectively. The results demonstrate that our Tf formulations possess excellent particle size stability, with no significant changes in the particle size of the medicated Tfs after 1 year of storage at different temperatures ( p > .05). The average particle size of the fresh Gen Tfs was 168.867 ± 9.774 nm and, after 12 months, the particle size was 164.278 ± 4.110 nm and 199.811 ± 14.539 nm for the medicated formulations at 5 °C and 25 °C, respectively . Regarding the blank Tfs, the average particle size of the fresh sample was 160.392 ± 2.710 nm with no significant changes after 1 year of storage at 5 °C ( p > .05). In addition, at 25 °C there were no significant changes in the particle size of the blank Tfs during the first 6 months ( p > .05). However, after 6 months of storage there was a significant increase in the blank Tf particle size ( p = .0090), which is likely due to particle aggregation as a result of prolonged storage at the higher temperature (Omar et al., ), as illustrated in .
Despite the significant increase in the particle size of the blank Tfs after 6 months of storage at 25 °C, the particle size is still in the acceptable range for efficient delivery of the encapsulated drug into the deeper layers of the skin (Verma et al., ). It has also been reported that a particle size below 600 nm is essential to deliver the encapsulated drug into the different layers of a skin tumor (Danaei et al., ). Regarding the PDI values of the Tfs demonstrated in and , although there were significant changes, especially for the medicated Tfs at some time points, all the measured PDI values are still in the acceptable range for phospholipid-based vesicles (< 0.4) (Putri et al., ). PDI is known to be an indicator of the quality of the particle size distribution. It is clearly observed from our results ( and ) that our Tfs have a uniform and narrow particle size distribution, which is indicative of good physical stability and proper performance at the tumor application site (Danaei et al., ). Concerning the zeta potential values of the prepared Gen Tfs illustrated in and , the fresh Gen Tf and blank Tf samples possess high negative zeta potential values of −48.408 ± 1.253 mV and −58.356 ± 1.363 mV, respectively. After 12 months of storage at 5 °C, there were no significant changes in the measured zeta potential values of either the medicated (−47.953 ± 0.903 mV) or blank Tfs (−55.547 ± 0.750 mV) ( p > .05). Similarly, at 25 °C, there were no significant changes in the measured zeta potential values for the first 3 months for the medicated Tfs and for the first 6 months for the blank Tfs ( p > .05). However, there was an observed increase in the zeta potential negativity at 25 °C after 6 months for the medicated Tfs ( p = .0213) and after 9 months ( p = .0081) for the blank Tfs, as demonstrated in and , respectively.
This observed increase in the negativity of the Tfs stored at 25 °C is likely due to hydrolysis of the lipid contents of the prepared Tfs, with exposure of the negatively charged free carboxylic acid groups at the higher temperature (Tamilvanan et al., ). Advantageously, this increase in zeta potential negativity will provide more physical stability to the Tf formulations. It will also help to maintain a smaller particle size and a narrow particle size distribution (PDI) owing to the increased and continuous repulsion between adjacent particles in the dispersion (Izak-Nau et al., ). Regarding the physical stability of the Tf CP940 hydrogels after 12 months of storage at different temperatures, there were no observed changes in the hydrogel physical appearance or consistency at 5 °C for either the medicated or blank Tf hydrogels. Similarly, there were no observed changes in the hydrogel physical appearance or consistency at 25 °C for either the medicated or blank Tf hydrogels during the first nine months of the stability study. After nine months, there was an observed decrease in the hydrogel consistency for both the medicated and blank Tf hydrogels. This change in consistency is likely due to hydrolysis of the lipid contents of the prepared Tfs with the liberation of free fatty acids at the higher temperature. These free fatty acids caused a decrease in the pH value of the CP940 hydrogels below its pKa value of about 5.5, which resulted in its conversion from the gel to the sol state, as CP940 is a pH-triggered in situ gelling polymer (Gupta and Vyas, ). This change in the pH values of the hydrogels during long-term storage at 25 °C could be mitigated by preparing the hydrogels in a suitable biocompatible buffering system. The changes in the pH values of the prepared Tf hydrogels during storage for 12 months at different temperatures are illustrated in and .
Initially, the pH values were 5.622 ± 0.036 and 5.608 ± 0.021 for the medicated and blank Tf hydrogels, respectively. No significant changes were observed in the pH values of either the medicated (5.578 ± 0.083) or blank (5.702 ± 0.031) formulations after storage for 12 months at 5 °C ( p > .05). Similarly, at 25 °C, there were no significant changes in the pH values for up to 3 months of storage for the medicated hydrogel and for up to 6 months for the blank hydrogel ( p > .05). In contrast, at the end of the study, there was an observed decrease in the pH values of the formulations stored at 25 °C for the medicated (5.057 ± 0.077, p = .006) and blank (4.796 ± 0.039, p = .0002) hydrogels, respectively ( and ). This observed decrease in pH at the elevated temperature is likely due to the release of free fatty acids as a result of hydrolysis and/or oxidative degradation of the transfersome phospholipid components at the higher temperature (Tamilvanan et al., ). Although there was a decrease in the measured pH values of the formulations stored at 25 °C for 1 year, according to the EEMCO guidelines the formulations still have ideal pH values for a semi-solid formulation intended for topical use (pH 4.5–7) (Novi et al., ; Parra et al., ; Lukić et al., ). The chemical stability of the medicated CP940 Tf hydrogel, expressed as drug content, is illustrated in . As is evident from our results, there were no significant changes in the formulations' drug content at any time point during the 1 year of storage, regardless of the storage temperature ( p > .05). This improved chemical stability of genistein may be due to its incorporation into this lipid-based drug delivery system, which provides additional protection for the incorporated active ingredient against possible degradation by oxidation, light and temperature (Matharoo et al., ).
These data clearly indicate the very high chemical stability of genistein in the tested formulations at temperatures up to 25 °C, which supports its use as a promising topical drug delivery system for the treatment of skin melanoma that could be stored at ambient temperature without any special storage conditions. As a precautionary measure, however, we recommend storing the Tf hydrogels under refrigeration to preserve the integrity of their lipid components and mitigate pH changes under ambient conditions. Conclusion Melanoma is the most dangerous form of skin cancer. Several factors make it more dangerous than other forms of skin cancer, including its high incidence; high chance of metastasis; high mortality rate; resistance to the available treatment options; and the possible side effects of these treatment options. All these factors leave the drug market in urgent, unmet need of a safe and effective treatment for skin melanoma. In the current study, we succeeded in preparing, optimizing and characterizing a topical drug delivery system that meets this urgent need. Genistein, a well-known natural chemotherapeutic agent, was incorporated into a topical transfersome hydrogel to provide a safe and effective antitumor topical drug delivery system for the treatment of skin melanoma. Transfersomes were selected as nanocarriers for this purpose because their ultra-deformable structure allows more drug to penetrate into the deeper layers of the tumor with higher antitumor activity. Our optimized formulations possessed favorable characteristics that enable excellent antitumor properties, such as a tiny particle size with narrow particle size distribution, spherical shape and sustained-release behavior. Evaluation of the antitumor activity of genistein transfersomes in a 3D melanoma spheroid model proves that genistein is a potent chemotherapeutic agent.
The antitumor activity of genistein was also improved upon incorporation into transfersomes because of the synergistic effect exerted by the Tf components, which enhanced the penetration of the drug into the different layers of the melanoma spheroids. In addition to the enhanced chemotherapeutic activity, our formulations possess good physical and chemical shelf-life stability. In conclusion, our genistein transfersome hydrogel could serve as a promising topical drug delivery system for the treatment of skin melanoma. To support this proposal, further studies are required before our formulation is ready for in vivo and/or clinical evaluation; examples include testing the transfersomes' safety, efficacy and integrity upon long-term storage. Tf integrity could be tested by evaluating the drug encapsulation and drug release behavior of aged formulations.
Application of microfluidic technology and nanoencapsulation to amplify the antibacterial activity of clindamycin against a food born pathogen Biological contaminants that can result in foodborne illnesses are known as foodborne pathogens, including bacteria, viruses, and parasites. The emergence of two or more cases of a similar illness brought on by the consumption of a food is known as a foodborne disease outbreak. Escherichia coli ( E. coli ) is among the more frequently reported food pathogens and is normally found in the lower intestines of warm-blooded animals. While most strains of E. coli are not harmful, some can result in severe food poisoning . Shiga toxin-producing E. coli (STEC) is capable of causing serious foodborne illnesses. E. coli may survive in vegetables and other foods and persist in the environment for long periods of time . Since E. coli is the most prevalent Gram-negative infection in humans, antibiotic resistance in this pathogen is especially concerning . Consequently, it is crucial to use natural compounds to combat antibiotic resistance, since they present an acceptable replacement for conventional antibiotics, which are losing their efficacy as a result of the emergence of resistant bacteria. It has been demonstrated that natural compounds possessing antibacterial qualities can offer a viable path forward for the development of novel therapies . The antibacterial effects of natural compounds such as EOs, and the synergistic effect of two or more EO components , have been reported by several researchers. The antibacterial properties of EOs result from components that act synergistically or additively, acting at multiple sites of action at the cellular level rather than through a single mechanism .
Recently, much consideration has been given to the isolation and utilization of new bioactive compounds with antioxidant and/or antimicrobial activity from botanical sources . For example, Nostro and Papalia examined and confirmed the antibacterial activity of carvacrol against a variety of food-borne pathogens and microorganisms . As an advanced strategy in this area, our team has already published work on the enhanced antibacterial activity of EOs through the use of nanoemulsification techniques – . Nanoemulsions are defined as fine colloidal dispersions of water-in-oil or oil-in-water droplets within the size range of 10–600 nm, used in the pharmaceutical and biomedical industries . Nanoemulsions are opening promising horizons for the development of novel cosmetics, diagnostics, and pharmaceuticals as well as biotechnology products. The terms submicron emulsion (SME), small emulsion, and ultrafine emulsion are also used synonymously. Nanoemulsions usually consist of heterogeneous mixtures of lipid and aqueous phases wherein stability is achieved through the use of suitable materials known as emulsifiers , . The use of nanoemulsions as delivery systems has been shown to extend the residence time of drugs in the body. Previous studies have shown the use of nanoemulsion technology to improve the bioavailability of lipophilic drugs and antibiotics . Simultaneous administration of existing antibiotics and essential oils (EOs) has been experimentally investigated as an alternative strategy for treating infections caused by drug-resistant bacteria . Some studies have confirmed the synergistic and/or additive effects of EOs and some antibiotics against bacteria . Indeed, nanoemulsification provides a large contact surface area, which is further intensified when a highly efficient microfluidic chip is employed to attain a high contact area between bioactive compounds and cell membranes.
When it comes to detecting antimicrobial activity, microfluidic devices have several advantages over traditional techniques, including fast antibacterial compound testing in less than an hour, small quantities of required reagents and medium, and versatile microfluidic designs for special purposes. Furthermore, microfluidic devices can support modular detection techniques and comprehensive cultivation regimens, enabling more accurate and effective investigation of the interaction between the bacterial cell membrane and antibacterial agents. This is in contrast to conventional techniques, which frequently need more intricate handling and longer incubation times. Moreover, visual investigation of the bacterial status upon treatment is possible using microfluidic technology. Incorporating appropriate natural compounds along with an antibiotic such as clindamycin in nanoemulsions can be considered a practical solution to increase the contact surface and hence the antibacterial activity of antibiotics. This study aims to investigate the possibility of formulating a stable Mentha piperita essential oil/clindamycin nanoemulsion (MEO/C NE) for antibacterial activity experiments. First, the effects of important formulation variables, including the surfactant, EO, and clindamycin concentrations, on nanoemulsion properties such as mean particle size and stability were studied using the Response Surface Methodology (RSM) technique. Then, the application of a microfluidic chip for evaluating the antibacterial activities of the nanoemulsions against the E. coli bacterium was considered and compared to the conventional method. The amounts of potassium, nucleic acid, and protein released were scrutinized to assess the extent of bacterial cell destruction. The microscopic structural changes elucidated the morphology of the nanoemulsion and bacterial strains after the microfluidic treatment and the conventional morphology technique. Material and method The study used E.
coli ATCC 25,922 maintained in 2X Nutrient Broth containing 15% v/v glycerol as a cryoprotectant; the organisms were kept as freezer stock at −20 °C. Before starting any experiment, a fresh culture was created on a Nutrient agar plate (CONDA Pronadisa). Phosphate-buffered saline (PBS), neutral red, and the non-ionic surfactants Span 80 and Tween 80 were imported from Merck, Millipore (Darmstadt, Germany). Clindamycin base (Caspian Tamin Co., Iran) and Mentha piperita EO (Tabibdaru Co., Kashan, Iran) were purchased. A high-speed homogenizer (SilentCrusher M, Heidolph, Germany) was used for emulsification, and a light microscope (Carl Zeiss Microscopy GmbH, Jena, Germany) was used to detect the bacteria. A Scanning Electron Microscope (Hitachi S-4700, Tokyo, Japan), a Dynamic Light Scattering (DLS) device (Nanophox, Sympatec GmbH, Clausthal, Germany), and a UV–visible spectrophotometer (BioPhotometer, Eppendorf AG, Hamburg, Germany) were utilized for further experiments. Design of experiment The implementation of design of experiments (DOE), including orthogonal array design, Box-Behnken design (BBD), and central composite design (CCD), to optimize technical processes and formulations is expanding in research laboratories. Based on some initial testing, the appropriate factor levels of surfactant (3.0, 4.0, 5.0 w/w%), essential oil (2.0, 3.0, 4.0 w/w%), and clindamycin (0.01, 0.055, 0.10 w/w%) were chosen. Seventeen experiments were selected based on the BBD, including 5 central-point replications (Table ), to reasonably estimate the experimental error. It should be noted that each characterization test of the formulations was repeated three times and the average values are reported. A quadratic regression model was executed to predict the relationship between the nanoemulsion droplet size and the variables, as demonstrated in Eq.
: 1 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{Y}}\,=\,{\beta _0}\,+\,{\beta _{\text{1}}}{{\text{X}}_{\text{1}}}\,+\,{\beta _{\text{2}}}{{\text{X}}_{\text{2}}}\,+\,{\beta _{\text{3}}}{{\text{X}}_{\text{3}}}\,+\,{\beta _{{\text{12}}}}{{\text{X}}_{\text{1}}}{{\text{X}}_{\text{2}}}\,+\,{\beta _{{\text{13}}}}{{\text{X}}_{\text{1}}}{{\text{X}}_{\text{3}}}\,+\,{\beta _{{\text{23}}}}{{\text{X}}_{\text{2}}}{{\text{X}}_{\text{3}}}\,+\,{\beta _{{\text{11}}}}{{\text{X}}_{\text{1}}}^{{\text{2}}}\,+\,{\beta _{{\text{22}}}}{{\text{X}}_{\text{2}}}^{{\text{2}}}\,+\,{\beta _{{\text{33}}}}{{\text{X}}_{\text{3}}}^{{\text{2}}}$$\end{document} where Y denotes the droplet size (nm); β 0 is the intercept coefficient; β 1 , β 2 , and β 3 are the main-effect coefficients; β 11 , β 22 , and β 33 are the squared coefficients; and β 12 , β 13 , and β 23 are the interaction coefficients. Design-Expert software (Stat-Ease Inc., version 7.0.0) was used to analyze the experimental data and evaluate the predicted responses.

Nanoemulsion preparation

Clindamycin-loaded nanoemulsions were prepared by dissolving clindamycin in the EO and Span 80 and then dispersing this oil phase in deionized water containing a non-ionic surfactant: a solution of Tween 80 in deionized water (the aqueous phase) was added to the mixture of clindamycin, EO, and Span 80 (the oil phase). A high-speed homogenizer was used to prepare the nanoemulsion samples ; the whole mixture was homogenized at a stirring rate of 19,000 rpm for 20 min. The mean particle diameter of the nanoemulsions stored at – 4 °C was monitored over 3 months to determine the stability of the samples.

Microfluidic system

The techniques of microchip design and fabrication were reported in detail in our previous work .
The microchannel, made of PDMS (Dow Corning Corp., USA), was created by microlithography. SU-8 photoresist film was used in photolithography to create the master molds. The PDMS prepolymer (Sylgard 184) and its curing agent were then blended well in a 10:1 ratio and poured onto the SU-8 molds. After curing at 90 °C for 30 min, the PDMS was peeled from the molds. Finally, the two PDMS layers were bonded by an oxygen plasma procedure for 1 min at 8 mbar and 40 W. The inlets and outlets of the microchannels were created using a biopsy punch 1.25 mm in diameter.

E. coli cultivation and preparation

The preparation of the E. coli bacterium depicted in Fig. was described in detail in a previous study . Briefly, a colony was picked and added to sterile Mueller–Hinton broth (MHB) and incubated under aerobic conditions at 37 °C and 200 rpm, followed by centrifugation for 10 min at 4000 rpm at 4 °C. 1 mL of neutral red solution (0.04 mg in 100 mL) was added to the harvested cells, followed by incubation in the shaker for 10 min at 37 °C. After centrifuging the resulting suspension for 10 min at 4000 rpm at 4 °C, 10 mL of PBS buffer was slowly added. The obtained suspension was then injected into the microfluidic chip. In the quantitative tests, the bacterial suspension was injected directly into the device. The design and manufacture of the chip are described in detail in a previous study .

Minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC)

To determine the MIC, serial two-fold dilutions of the samples at concentrations of 10 to 0.005 mg/mL were prepared in sterile 96-well plates as described by the Clinical and Laboratory Standards Institute (CLSI). The MIC was taken as the lowest concentration that completely inhibited bacterial growth (the first clear well).
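The two-fold dilution scheme above maps directly onto a 12-well series spanning 10 down to roughly 0.005 mg/mL. A minimal sketch, where the well count, readout flags, and helper names are illustrative assumptions rather than details from the paper:

```python
# Two-fold serial dilution series for broth-microdilution MIC testing,
# spanning the stated range of 10 mg/mL down to ~0.005 mg/mL.

def dilution_series(top, n_wells):
    """Successive two-fold dilutions starting from the top concentration."""
    return [top / 2 ** i for i in range(n_wells)]

def mic(concs, growth):
    """MIC = lowest concentration with no visible growth (first clear well)."""
    clear = [c for c, g in zip(concs, growth) if not g]
    return min(clear) if clear else None

series = dilution_series(10.0, 12)          # 10, 5, 2.5, ..., ~0.0049 mg/mL
# Hypothetical plate readout: growth (turbidity) only in the two most dilute wells.
growth_flags = [False] * 10 + [True] * 2
print(round(mic(series, growth_flags), 4))  # -> 0.0195
```

Notably, the MIC values of 0.0195 and 0.0390 mg/mL reported later in the Results fall exactly on the tenth and ninth wells of this series.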
Using the microtiter broth dilution technique, the minimum bactericidal concentration (MBC) was recorded as the lowest sample concentration giving 99.9% bacterial death after culturing at 37 °C for 24 h . Each experiment was repeated at least three times.

Cell membrane integrity

The integrity of the bacterial cell membranes was assessed as described in detail in a previous study . Briefly, membrane integrity was evaluated by monitoring the discharge of intracellular materials, namely nucleotides and proteins, at 260 and 280 nm, respectively.

Time kill assay

A time-kill assay was performed to confirm the growth-inhibitory effects of clindamycin, MEO, and MEO/C NE. Samples were collected at four intervals (1, 2, 3, and 4 h) during the assay. The original bacterial solution was diluted with a sodium chloride solution (0.085% w/v) to facilitate colony counting, and 100 µL specimens were extracted from the dilutions. For the microfluidic chip, a predefined amount of the outflow was sampled at the predetermined residence time and subjected to the same colony-counting procedure. Following a 24-hour incubation period at 37 °C, the initial bacterial solution was serially diluted with sodium chloride solution (0.085% w/v) before the plate colony count was carried out .
Lastly, the bacterial cell viability and growth inhibition were calculated based on the following definitions: 2 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\:\text{P}\text{e}\text{r}\text{c}\text{e}\text{n}\text{t}\text{a}\text{g}\text{e}\:\text{o}\text{f}\:\text{v}\text{i}\text{a}\text{b}\text{i}\text{l}\text{i}\text{t}\text{y}\:=\frac{\text{N}\text{u}\text{m}\text{b}\text{e}\text{r}\:\text{o}\text{f}\:\text{c}\text{o}\text{l}\text{o}\text{n}\text{i}\text{e}\text{s}\:\text{c}\text{o}\text{u}\text{n}\text{t}\text{e}\text{d}}{\text{N}\text{u}\text{m}\text{b}\text{e}\text{r}\:\text{o}\text{f}\:\text{c}\text{o}\text{n}\text{t}\text{r}\text{o}\text{l}\:\text{c}\text{o}\text{l}\text{o}\text{n}\text{i}\text{e}\text{s}\:}\times\:100$$\end{document}
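The viability definition above reduces to a one-line ratio; a minimal sketch with hypothetical colony counts (the counts and function names are illustrative, not from the paper):

```python
# Percentage of viability per the definition above:
# viability (%) = colonies counted / control colonies * 100,
# with growth inhibition as its complement.

def percent_viability(colonies_counted, control_colonies):
    if control_colonies <= 0:
        raise ValueError("control plate must contain at least one colony")
    return colonies_counted / control_colonies * 100.0

def percent_inhibition(colonies_counted, control_colonies):
    return 100.0 - percent_viability(colonies_counted, control_colonies)

# Hypothetical counts: 23 colonies on a treated plate vs. 460 on the control
# (both plates taken at the same dilution factor, so the factor cancels).
print(percent_viability(23, 460))   # -> 5.0
print(percent_inhibition(23, 460))  # -> 95.0
```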
Optimization of nanoemulsion formulation

A subclass of RSM, the Box–Behnken design (BBD), was used to determine the most appropriate nanoemulsion droplet size targeting the highest stability and biological response. The droplet size of the nanoemulsion was measured by DLS for each combination of the independent parameters, namely the percentages of surfactant (A), EO (B), and clindamycin (C) given in Table . The mean droplet size was calculated as the average of three measurements. The BBD comprises a total of 17 proposed experimental runs, including five repeated center points (Table ). The suggested correlations are shown in Eqs. and as functions of the independent variables for the droplet size, in actual and coded values, respectively.
3 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\begin{aligned} {\text{Droplet size of nanoemulsion }}({\text{Actual}})\,= & \,{\text{Mean droplet size }}= - {\text{1}}0{\text{4}}.{\text{94593}}+{\text{ 67}}.{\text{45111 }} \times {\text{ Surfactant}}\,+\,{\text{3}}0.{\text{91194}}~ \\ & \times {\text{ Essential Oil}}\,+\,{\text{17}}.{\text{63519 }} \times {\text{ Clindamycin}} - \,{\text{5}}.{\text{925}}00 \times {\text{ Surfactant }} \\ & \times {\text{ Essential Oil}}\,+\,{\text{5}}.{\text{38889}} \times {\text{ Surfactant }} \times {\text{ Clindamycin}}\,+\,{\text{11}}.{\text{55556}} \\ & \times {\text{ Essential Oil }} \times {\text{ Clindamycin}} - \,{\text{7}}.{\text{23}}000 \times {\text{ Surfactan}}{{\text{t}}^{\text{2}}}\,+\,0.{\text{645}}00{\text{ }} \\ & \times {\text{ Essential Oi}}{{\text{l}}^{\text{2}}} - \,{\text{41}}.{\text{25926}} \times {\text{Clindamyci}}{{\text{n}}^{\text{2}}} \\ \end{aligned}$$\end{document} 4 \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\begin{aligned} {\text{Droplet size of nanoemulsion }}({\text{Coded}})\,= & \,{\text{Mean droplet size}}\,=\,+\,{\text{1}}0{\text{4}}.{\text{76}}--{\text{5}}.{\text{2}}0 \times {\text{A}}\,+\,{\text{17}}.{\text{44}} \times {\text{B}}\,+\,{\text{12}}.{\text{81}} \times {\text{C}} - \,{\text{5}}.{\text{93}} \times {\text{A}} \\ & \times {\text{B}}\,+\,{\text{2}}.{\text{43}} \times {\text{A}} \times {\text{C}}\,+\,{\text{5}}.{\text{2}}0 \times {\text{B}} \times {\text{C }} - \,{\text{7}}.{\text{23}} \times {{\text{A}}^{\text{2}}}\,+\,0.{\text{65}} \times {{\text{B}}^{\text{2}}} - \,{\text{8}}.{\text{36}} \times {{\text{C}}^{\text{2}}} 
\\ \end{aligned}$$\end{document} Table displays the ANOVA statistical analysis for the regression model. From Table , the low p-value ( p < 0.0002) and high F-value ( F = 24.24) confirm the statistical significance of the predicted model. Also, the lack of fit is not significant (0.1309) relative to the pure error, indicating that the model fits well and the independent variables have considerable effects on the response. The other statistics of the model, namely the coefficient of determination ( \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\:{R}^{2}$$\end{document} ) of 0.9689, adjusted R 2 = 0.9289, adequate precision = 17.766, predicted R 2 = 0.6274, standard deviation = 4.68, and coefficient of variation (C.V.%) = 4.79, suggest a strong agreement between the experimental and predicted values. As a result, the model is sufficient to draw response surface curves and to predict the conditions that maximize stability. By extension, Fig. a displays a plot of the actual outcomes versus the predicted ones; it helps identify values or sets of data that the regression model cannot forecast correctly, and these graphs demonstrate how well the predicted values match the experimental data. The residuals are plotted against the fitted (predicted) responses in Fig. b; an adequate model shows random scatter, i.e. a constant band of residuals across the graph, and the findings obtained validate the proposed model's correctness. The discrepancy between the actual data and the expected response is shown in the normal probability plot of residuals in Fig. c.
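As a quick numerical check, the coded model (Eq. ) can be evaluated directly from its reported coefficients. A minimal sketch; the actual-to-coded mapping assumes the standard (value - center) / half-range coding implied by the stated factor levels, which is an assumption rather than something stated explicitly in the paper:

```python
# Droplet-size prediction from the coded BBD model (Eq. 4).
# Coefficients are the reported fitted values; the coding of actual
# factor values is assumed to be (value - center) / half-range.

def code(value, center, half_range):
    return (value - center) / half_range

def droplet_size_coded(A, B, C):
    """Predicted mean droplet size (nm) as a function of coded factors."""
    return (104.76 - 5.20 * A + 17.44 * B + 12.81 * C
            - 5.93 * A * B + 2.43 * A * C + 5.20 * B * C
            - 7.23 * A ** 2 + 0.65 * B ** 2 - 8.36 * C ** 2)

# At the center point (all factors at mid-level) the model returns the intercept:
print(droplet_size_coded(0, 0, 0))  # -> 104.76

# Example setting: surfactant 5 w/w%, EO 2 w/w%, clindamycin 0.055 w/w%.
A = code(5.0, 4.0, 1.0)        # +1 (levels 3-5, center 4)
B = code(2.0, 3.0, 1.0)        # -1 (levels 2-4, center 3)
C = code(0.055, 0.055, 0.045)  #  0 (levels 0.01-0.10, center 0.055)
print(round(droplet_size_coded(A, B, C), 2))  # -> 81.47
```

The positive B coefficient (+17.44) reproduces the dominant trend of the response surfaces: raising the essential oil fraction increases the predicted droplet size.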
For the residuals to follow a normal distribution, the data points must lie along a straight line (Fig. c); as seen in Fig. c, the points do follow a straight line consistent with a normal distribution.

Interactive effect of formulation factors on NE droplet size

The ultimate droplet size is significantly influenced by the concentrations of the constituents that make up the nanoemulsion. High surfactant concentrations have been suggested to raise the risk of toxicity in food , . Herein, the EO, clindamycin, and surfactant percentages were varied to scrutinize the suitable droplet size of the clindamycin-loaded NEs. The 3D response surface contour plots of the quadratic polynomial models, which depict the relationships between the interacting independent variables and the dependent variable, are displayed in Fig. . Figure a illustrates the mutual effect of the surfactant and essential oil percentages on the droplet size of the MEO/C NE; a significant interaction between the surfactant and essential oil variables is observed. At a lower fixed surfactant value (3 to 4 w/w%), increasing the essential oil leads to a significant increase in the droplet size, whereas at a higher fixed surfactant value (4.5 to 5 w/w%) there is no remarkable change in the mean droplet size. A main goal of this study is to increase the essential oil concentration while keeping the droplet size small; at moderate concentrations of both components, the droplet size also remains moderate. The ideal situation occurs when the essential oil and surfactant amounts lie in the middle of the curve, because a lower surfactant proportion is advantageous. These findings corroborate those of , who proposed that an increase in surfactant percentage causes a decrease in the droplet size of cinnamon oil nanoemulsion. Figure b shows the impact of the clindamycin and surfactant concentrations on the MEO/C NE droplet size.
As observed, increasing the clindamycin concentration at a constant surfactant concentration significantly increased the droplet size, which can be attributed to the contribution of clindamycin to the droplet size. Also, increasing the surfactant concentration up to a specific amount leads to a larger droplet size until a peak is reached; beyond this, further increases in concentration reduce the droplet size. Surfactants typically lower the free energy required for the generation of nanoemulsions: by lowering the surface tension at the water–oil interface, surfactants, which are made up of both hydrophilic and hydrophobic components, bridge the gap between the aqueous and oil phases , . This implies that raising the Tween 80 content could decrease the nanoemulsion droplet size. Figure c shows the interactive effect of essential oil and clindamycin on the mean droplet size of the MEO/C NE. The saddle-type curve in Fig. c implies that an increase in both the clindamycin and essential oil concentrations leads to a larger droplet size. The computed p-values in Table confirm that the influences of the independent variables are statistically significant.

Optimization of the droplet size of nanoformulation

The regression model was employed to attain the optimal values of the selected factors. Three cases proposed by the DOE software were re-performed to verify the accuracy of the fitted correlation (Table ). The experimental and predicted values were shown to be in close agreement, confirming that the fitted model is reliable. The DLS plots associated with the mean particle diameters of the nanoemulsions, 75.46 ± 3.2 nm, 73.62 ± 4.4 nm, and 99.35 ± 2.6 nm, are illustrated in Fig. S2a, b, and c. To assess the encapsulation efficiency percentage (EE%) of clindamycin in the nanoparticles, a colorimetric method using ferric ion as the colorimetric agent was utilized . Briefly, stock solutions of clindamycin and ferric ion were prepared, as described in the supporting information.
Then, to plot a calibration curve, clindamycin–ferric ion standards ranging from 10 to 250 µg/mL were prepared, and the UV absorption at 590 nm was evaluated (Fig. S4). The prepared nanoemulsion was separated into two distinct phases (organic and aqueous) by adding KCl (Fig. S5), as described by Klaus et al. . The ferric chloride solution was added to the organic phase and, upon color change, the absorption was evaluated at 590 nm and the concentration was calculated. The EE% was calculated to be 59.63% w/w. After three months of storage at 4 °C, the mean particle diameter had not significantly changed, indicating long-term storage stability (Fig. ). Following three months of storage, the polydispersity indices (PDI) of the produced nanoemulsions were determined to be 0.185, 0.183, and 0.210. The morphology of the nanoemulsion prepared at optimum condition No. 1, obtained using a transmission electron microscope (TEM, Philips CM30), is shown in Fig. a. The droplets were spherical and uniform in size. In addition, the number-frequency histogram in Fig. b displays the nanoemulsion's particle size distribution for optimal condition No. 1, with a droplet size of 64.7 nm on a linear scale. The zeta potentials of the prepared nanoemulsions (No. 1, 2, and 3) were measured as − 16, − 15.4, and − 15.3 mV, respectively (Fig. S3).

Effect of different formulated samples on bacterial inhibition

The MIC and MBC tests for the antibacterial activities of the prepared samples (Table ) were evaluated against the E. coli bacterium. The MIC values for MEO/C NE1, MEO/C NE2, and MEO/C NE3 were of the same order of magnitude: 0.0195, 0.0390, and 0.0195 mg/mL, respectively. Comparison with the MEO NE indicates the effect of nanoemulsification in improving antibacterial activity.
However, these values are considerably lower than those of pure MEO and clindamycin, indicating an enhanced inhibitory effect that can be ascribed to the impact of the nanoemulsion system as well as the active compounds. Since there was no discernible variation in the MIC among the optimal points, MEO/C NE1 was determined to be the best experimental formulation and was used for the other biological studies. Other reports confirm that the antibacterial activity is related to the physicochemical characteristics of essential oil NEs and their particle size , . It has also been reported that as the droplet size decreases, the electrostatic interaction with the bacterial membrane increases, thereby producing stronger antibacterial potency . From Table , similar trends were observed for the MBC values of the different test samples. Similar results were reported for ginger oil emulsions and lemongrass oil emulsions . By binding to the lipid and protein components of the bacterial membrane, nanoemulsions penetrate cell walls and cytoplasmic membranes, resulting in cytolytic and intracellular toxic effects. In other words, a possible mechanism underlying the antibacterial compounds of the EO is that they react with the phospholipid components of the cell wall, converting them into other compounds such as glycerol and phosphoric acid. Owing to this conversion, the phospholipid layer can no longer maintain the shape of the cell membrane; consequently, leakage occurs in the bacterial cell membrane. Morphologically, this reaction appears as the ghost-cell shape of E. coli: the cell membrane shrinks and the cell lyses, a form also called a spheroplast. On visual inspection, some of the lysed cells are only partially visible whereas others are visible in full . Figure also presents the variations in concentration of E.
coli bacterium (OD 600 ) for the different sample formulations at residence times of 5, 10, 15, 20, 25, and 30 min in the microfluidic chip. From Fig. , the MEO/C NE has a high potential to inhibit the growth of bacterial cells compared to clindamycin and MEO alone; after 30 min of incubation within the microfluidic chip at a concentration of 62.0 µg/mL, nearly all bacteria were suppressed. Comparing Fig. and Table , it is worth noting that in the conventional method the bacteria were killed within 24 h (Table ), whereas the results in Fig. show that the bacteria were killed after only 30 min of incubation, demonstrating a more effective bactericidal approach. Further insight into the bactericidal properties of the emulsions was gained by intracellular component release assays. Since disruption of cell membrane integrity is always associated with the leakage of intracellular substances, the amount of intracellular leakage into the medium was monitored , . The antibacterial activity of the samples (Table S2) against the E. coli strain was evaluated by measuring the release of proteins and nucleic acids after a 30-minute residence period in the microfluidic chip and using the standard approach at the MIC concentration (Fig. S6). It is evident from the results in Table S2 that the MEO/C NE has higher absorption values compared to MEO and clindamycin alone. Fig. S6 presents the release of protein and nucleic acid for the case-study samples after 6 h by the conventional method at the MIC concentration. According to Fig. S6, the highest leakages in terms of OD 260 nm and OD 280 nm for E. coli were recorded as 0.815 and 0.866.
The obtained results show noticeable differences between the microfluidic chip and traditional approaches, possibly due to the high surface-area-to-volume ratio, which leads to the generation of surface forces such as surface tension (a dominant force in the microfluidic chip).

Time kill studies

Dynamic time-kill measurements were conducted to evaluate the bactericidal properties of the active compounds (MEO and clindamycin) and the nanoemulsion (MEO/C NE) against the growth of E. coli in the conventional method and the microfluidic chip, depicted in Fig. a and b, respectively. As can be seen from Fig. a, the pure active compounds show decreasing trends toward bacterial inhibition, although they show weaker inhibitory effects than the MEO/C NE. It must be noted that the different physicochemical properties of MEO and clindamycin result in different actions. According to Fig. b, similar trends in CFU reduction were observed when employing the microfluidic chip. However, there was a considerable difference in the time required for complete bacterial death: 30 min using the microchip versus 3 h with the conventional method. Other researchers have also suggested that decreasing the particle size improves the antibacterial activity of nano-sized compounds by facilitating penetration of the microbial cell wall – . The authors suggested that the smaller droplet diameters of the nanoemulsions increase collisions with the bacterial cell surface, thus enhancing the antibacterial effect. It has also been shown that droplet size correlates with the antimicrobial efficacy of nanoemulsions.

Scanning electron microscopy (SEM)

Using SEM, the morphological alterations of E. coli cells treated with MEO, clindamycin, and MEO/C NE at a concentration of 62.0 µg/mL for 20 min were examined. The shape of an E. coli cell, as seen in Fig.
a, demonstrated that untreated bacteria have a mostly intact cytoarchitecture, including a smooth cell wall and plasma membrane envelope. As can be recognized from Fig. b–d, structural changes and damage to the E. coli cell wall occur upon treatment with MEO, clindamycin, and MEO/C NE for 20 min. The antibacterial activity of the above compounds is thought to be caused by an interaction with the phospholipid components of the cell membrane, leading to greater penetration of the antibiotic and to conversion of the phospholipids into other compounds, including phosphoric acid and glycerol. This conversion disrupts the bacterial cell membrane and leaks its constituents, because the phospholipid layer can no longer maintain the shape of the cell membrane, and bacterial lysis (cytolysis) occurs , . Other proposed mechanisms of antimicrobial action include inhibition of the efflux pumps known to cause antibiotic resistance, disruption of the ATP balance that alters cellular activity through energy intermediates, inhibition of protein synthesis, and interference with quorum sensing .
(3) Droplet size of nanoemulsion (Actual): Mean droplet size = −104.94593 + 67.45111 × Surfactant + 30.91194 × Essential Oil + 17.63519 × Clindamycin − 5.92500 × Surfactant × Essential Oil + 5.38889 × Surfactant × Clindamycin + 11.55556 × Essential Oil × Clindamycin − 7.23000 × Surfactant² + 0.64500 × Essential Oil² − 41.25926 × Clindamycin²

(4) Droplet size of nanoemulsion (Coded): Mean droplet size = +104.76 − 5.20 × A + 17.44 × B + 12.81 × C − 5.93 × A × B + 2.43 × A × C + 5.20 × B × C − 7.23 × A² + 0.65 × B² − 8.36 × C²

Table displays the ANOVA statistical analysis for the regression model. From Table , the low p-value (p < 0.0002) and high F-value (F = 24.24) confirm the statistical significance of the predicted model. Also, the Lack of Fit is not significant (0.1309) relative to the pure error, indicating that the model fits well and the independent variables have considerable effects on the response. The other statistics of the model, namely the coefficient of determination (R²) = 0.9689, adjusted R² = 0.9289, adequate precision = 17.766, predicted R² = 0.6274, standard deviation = 4.68, and coefficient of variation (C.V.%) = 4.79, suggest a strong agreement between experimental and predicted values. As a result, the resulting model is sufficient to draw a response surface curve and predict the optimal conditions for the response. By extension, Fig. a displays a plot of the actual outcomes versus the predicted values. It helps identify values or data points that the regression model is unable to forecast correctly. These graphs demonstrate how well the predicted values match the experimental data. The residual plots versus the fitted responses, that is, a curve of the residuals against the predicted response values, are displayed in Fig. b. The plot should show random scatter, that is, a fixed range of residuals throughout the graph. The findings obtained validate the proposed model's correctness. The discrepancy between the actual data and the predicted response is shown in the normal probability plot of residuals in Fig. c.
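The coded regression model (Eq. 4) lends itself to a quick numerical check. The sketch below evaluates it at the design centre point and reproduces the reported adjusted R², assuming a 17-run, nine-term quadratic design; these run and term counts are an assumption, not stated in the text.

```python
# Coded quadratic model for mean droplet size (Eq. 4); A = surfactant,
# B = essential oil, C = clindamycin, each scaled to [-1, +1].
def droplet_size_coded(A, B, C):
    return (104.76 - 5.20 * A + 17.44 * B + 12.81 * C
            - 5.93 * A * B + 2.43 * A * C + 5.20 * B * C
            - 7.23 * A**2 + 0.65 * B**2 - 8.36 * C**2)

# At the design centre point (A = B = C = 0) the model returns the intercept.
print(droplet_size_coded(0, 0, 0))   # 104.76

# The reported adjusted R^2 can be reproduced from R^2 = 0.9689, assuming
# n = 17 runs and p = 9 model terms (these counts are an assumption).
r2, n, p = 0.9689, 17, 9
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(adj_r2, 4))              # 0.9289
```

At the centre of the design space the prediction collapses to the intercept (104.76 nm), the expected mean response there.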
For the residuals to follow a normal distribution, the data points should lie along a straight line (Fig. c). As seen in Fig. c, the points do follow a straight line consistent with a normal distribution. The final droplet size is significantly influenced by the concentrations of the constituents that make up the nanoemulsion. High surfactant concentrations have been suggested to raise the risk of toxicity in food. Herein, the EO, clindamycin, and surfactant percentages were varied to identify the suitable droplet size of the clindamycin-loaded NEs. The 3D response surface contour plots of the quadratic polynomial models, which depict the relationships between interacting independent and dependent variables, are displayed in Fig. . Figure a illustrates the mutual effect of surfactant and essential oil percentages on the droplet size of the MEO/C NE; a significant interaction between the surfactant and essential oil variables is observed. At lower constant values of surfactant (3 to 4 w/w%), increasing the essential oil leads to a significant increase in the droplet size, whereas when the surfactant is fixed at a higher value (4.5 to 5 w/w%) there is no remarkable change in mean droplet size. Besides reducing the droplet size, maximizing the essential oil concentration is a main goal of this study. The concentrations can be moderated to obtain the noticeable effects of the essential oil while keeping the droplet size moderate. Because a lower surfactant proportion is advantageous, the ideal situation occurs when the essential oil and surfactant amounts lie in the middle of the response surface. These findings corroborate earlier reports proposing that an increase in surfactant percentage causes a decrease in the droplet size of cinnamon oil nanoemulsion. Figure b shows the impact of clindamycin concentration and surfactant on the MEO/C NE droplet size.
As observed, increasing the clindamycin concentration at a constant surfactant concentration increases the droplet size significantly, which could be attributed to the contribution of clindamycin to the droplet size. Also, increasing the surfactant concentration up to a specific amount leads to a larger droplet size until a peak is reached; with further increases in concentration, the droplet size decreases. Surfactants typically lower the free energy required for the generation of nanoemulsions. By lowering the surface tension at the water-oil interface, surfactants, which are made up of both hydrophilic and hydrophobic components, bridge the space between the aqueous and oil phases. This implies that raising the Tween 80 content could result in a decrease in the nanoemulsion droplet size. Figure c shows the interaction of essential oil and clindamycin on the mean droplet size of MEO/C NE. In Fig. c, the saddle-type curve implies that an increase in both clindamycin and essential oil concentrations leads to larger droplet sizes. The computed p-values in Table confirm that the influence of the independent variables is statistically significant. The regression model was employed to attain the optimal values for the selected factors. Three cases proposed by the DOE software were re-run to verify the accuracy of the fitted correlation (Table ). Both experimental and predicted values were in close agreement, confirming that the fitted model is reliable. The DLS plots of the nanoemulsions, with mean particle diameters of 75.46 ± 3.2 nm, 73.62 ± 4.4 nm, and 99.35 ± 2.6 nm, are illustrated in Fig. S2a, b, and c. To assess the encapsulation efficiency percentage (EE%) of clindamycin in the nanoparticles, a colorimetric method using ferric ion as the colorimetric agent was utilized. Briefly, stock solutions of clindamycin and ferric ion were prepared, as described in the supporting information.
Then, to plot a calibration curve, clindamycin-ferric ion standards ranging from 10 to 250 µg/mL were prepared, and the UV absorbance at 590 nm was evaluated (Fig. S4). The prepared nanoemulsion was separated into two distinct phases (organic and aqueous) by adding KCl (Fig. S5), as described by Klaus et al. By adding the ferric chloride solution to the organic phase and observing the color change, the absorbance was measured at 590 nm and the concentration was calculated. The EE% was calculated to be 59.63% w/w. After three months of storage at 4 °C, the mean particle diameter had not changed significantly, indicating long-term storage stability (Fig. ). Following three months of storage, the polydispersity indices (PDI) of the produced nanoemulsions were determined to be 0.185, 0.183, and 0.210. The morphology of the prepared nanoemulsion at optimum condition No. 1 is shown in Fig. a, obtained using a transmission electron microscopy (TEM) instrument (Philips CM30). The droplets were spherical and uniform in size. In addition, in Fig. b, number-frequency histograms display the nanoemulsion's particle size distribution for optimal condition No. 1, with a droplet size of 64.7 nm on a linear scale. The zeta potentials of the prepared nanoemulsions (No. 1, 2, and 3) were −16, −15.4, and −15.3 mV, respectively (Fig. S3). The MIC and MBC tests for the antibacterial activities of the prepared samples (Table ) were evaluated against the E. coli bacterium. The MIC values for MEO/C NE1, MEO/C NE2, and MEO/C NE3 showed that the optimized formulations are in the same order of magnitude: 0.0195, 0.0390, and 0.0195 mg/mL, respectively. Comparison with the MEO NE indicates the effect of nanoemulsification in improving antibacterial activity.
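MIC values such as 0.0195 and 0.0390 mg/mL are consistent with successive steps of a two-fold broth-microdilution series. A minimal sketch of how such a series is generated and read follows; the 10 mg/mL starting stock and the growth pattern are hypothetical illustrations, not data from this study.

```python
# Two-fold broth-microdilution: the MIC is the lowest concentration that
# shows no visible growth. Stock concentration and growth pattern below
# are hypothetical, chosen so the read-out lands on the 0.0195 mg/mL
# scale reported in the text.
def dilution_series(stock_mg_ml, n_wells):
    return [stock_mg_ml / 2 ** i for i in range(n_wells)]

def read_mic(concentrations, growth_flags):
    # growth_flags[i] is True when well i shows visible turbidity
    inhibited = [c for c, g in zip(concentrations, growth_flags) if not g]
    return min(inhibited) if inhibited else None

concs = dilution_series(10.0, 10)          # 10.0 ... 0.01953125 mg/mL
growth = [False] * 10                      # hypothetical: no growth in any well
print(round(read_mic(concs, growth), 4))   # 0.0195
```

Note that 10 / 2^9 = 0.01953125 mg/mL, which rounds to the 0.0195 mg/mL reported for NE1 and NE3.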
However, as can be observed, these values are considerably lower than those of pure MEO and clindamycin, indicating an enhanced inhibitory effect, which can be ascribed to the impact of the nanoemulsion system as well as the active compounds. Other biological studies were conducted after the compound MEO/C NE1 was determined to be the best experimental formulation based on the obtained data, which showed no discernible variation in the MIC among the optimal points. Other reports confirm that the antibacterial activity is related to the physicochemical characteristics of essential oil NEs and their particle size. It has also been reported that as the droplet size decreases, the electrostatic interaction with the bacterial membrane increases, thereby producing stronger antibacterial potency. From Table , similar trends were observed for MBC values using the different test samples. Similar results were reported for ginger oil emulsions and lemongrass oil emulsions. By binding to the lipid and protein components of the bacterial membrane, nanoemulsions penetrate cell walls and cytoplasmic membranes, resulting in cytolytic and intracellular toxic effects. In other words, the possible mechanism underlying the antibacterial compounds of the EO is that they react with the phospholipid components of the cell wall, converting them into other compounds such as glycerol and phosphoric acid. Due to this conversion, the phospholipid layer can no longer maintain the shape of the cell membrane; consequently, leakage occurs in the bacterial cell membrane. This is seen morphologically in the ghost-cell shape of E. coli, a consequence of this reaction that gives rise to shrinkage of the cell membrane and causes the cell to lyse; such membrane-compromised cells are sometimes called spheroplasts. On visual inspection, it should be noted that some of the lysed cells are partially visible whereas others are fully visible. Figure also presents the variations in concentration of E.
coli bacterium (OD 600 ) for different sample formulations at different residence times (5, 10, 15, 20, 25, and 30 min) in the microfluidic chip. From Fig. , the MEO/C NE has a high potential to inhibit the growth of bacterial cells compared to clindamycin and MEO alone, and after 30 min of incubation within the microfluidic chip at a concentration of 62.0 µg/mL, nearly all bacteria were suppressed. On the other hand, comparing Fig. 5 and Table , it is worth noting that the bacteria were killed within 24 h in the conventional method (Table ), whereas the results in Fig. show that after only 30 min of incubation the bacteria were killed, demonstrating a more effective bactericidal approach. Further insight into the bactericidal properties of the emulsions was gained by intracellular component release assays. Since disruption of cell membrane integrity is always associated with the leakage of intracellular substances, the amount of intracellular leakage into the medium was monitored. By measuring the release of proteins and nucleic acids after a 30-minute residence period in the microfluidic chip and using the standard approach (MIC concentration), the antibacterial activity of the samples (Table S2) against the E. coli strain was evaluated (Fig. S6). It is evident from the results in Table S2 that the MEO/C NE has higher absorbance values compared to MEO and clindamycin alone. Fig. S6 presents the release of protein and nucleic acid for the studied samples after 6 h by the conventional method at the MIC concentration. According to Fig. S6, the highest leakages in terms of OD 260 nm and OD 280 nm for E. coli were recorded as 0.815 and 0.866.
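Leakage readings of this kind are often normalized against a fully lysed positive control to express relative membrane damage. A minimal sketch follows; the control OD of 1.0 is a hypothetical value, and only the 0.815 and 0.866 readings come from the text.

```python
# Relative intracellular leakage: OD260 tracks nucleic acids, OD280
# tracks proteins. The positive-control OD (complete lysis) used here
# is hypothetical, not a value from this study.
def relative_leakage_pct(od_sample, od_positive_control):
    return 100.0 * od_sample / od_positive_control

print(round(relative_leakage_pct(0.815, 1.0), 1))  # 81.5  (OD260, nucleic acids)
print(round(relative_leakage_pct(0.866, 1.0), 1))  # 86.6  (OD280, proteins)
```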
The obtained results show noticeable differences when comparing the microfluidic chip to traditional approaches, possibly due to the high surface-area-to-volume ratio, which leads to the generation of surface forces such as surface tension (a dominant force in the microfluidic chip). Dynamic time-kill measurements were conducted to evaluate the bactericidal properties of the active compounds (MEO and clindamycin) and the nanoemulsion (MEO/C NE) toward the growth of E. coli in the conventional method and the microfluidic chip, depicted in Fig. a, b, respectively. As can be seen from Fig. a, the pure active compounds show decreasing trends toward bacterial inhibition, although they have weaker inhibitory effects than the MEO/C NE. It must be noted that the different physicochemical properties of MEO and clindamycin result in different modes of action. According to Fig. b, similar trends in CFU reduction were observed when employing the microfluidic chip. However, there was a considerable difference in the residence time required for complete bacterial death: 30 min using the microfluidic chip versus 3 h with the conventional method. Other researchers have also suggested that decreasing the particle size improves the antibacterial activity of nano-sized compounds by aiding penetration of the microbial cell wall. The authors suggested that the smaller droplet diameters of the nanoemulsions led to an increase in collisions with the bacterial cell surface, thus enhancing the antibacterial effect. It has also been shown that droplet size correlates with the antimicrobial efficacy of nanoemulsions. Using an SEM test, the morphological alterations in E. coli cells treated with MEO, clindamycin, and MEO/C NE at a concentration of 62.0 µg/mL after 20 min were examined. The shape of an E. coli cell, as seen in Fig.
a, demonstrated that untreated bacteria have a mostly intact cytoarchitecture, including a smooth cell wall and plasma membrane envelope. As can be seen from Fig. b–d, structural changes and damage to the E. coli cell wall occur upon treatment with MEO, clindamycin, and MEO/C NE at 20 min. The antibacterial activity of the above compounds is thought to be caused by an interaction with the phospholipid components of the cell membrane, leading to greater penetration of the antibiotic and also converting the phospholipids into other compounds, including phosphoric acid and glycerol. This conversion causes the bacterial cell membrane to disrupt and leak its constituents, because the phospholipid layer can no longer maintain the shape of the cell membrane, and bacterial lysis (cytolysis) occurs. Other proposed mechanisms of antimicrobial action include inhibition of the efflux pumps known to cause antibiotic resistance, induction of ATP imbalances that alter cellular activity through energy intermediates, inhibition of protein synthesis, and disruption of quorum sensing. Natural compounds in conjunction with novel approaches such as nanotechnology and microfluidic devices can provide new strategies to combat bacterial resistance. The increase in contact surface provided by the nanoemulsion and the microfluidic chip, as well as the use of active compounds, are critical parameters for inhibiting bacterial activity. Using conventional and highly efficient microfluidic techniques, the interaction of the optimized MEO/C NE formulations with E. coli bacteria was studied. Natural active compounds interact with bacterial phospholipid bilayer membranes, resulting in cell lysis. The MEO/C NE and natural bioactive compounds demonstrated markedly improved bacterial cell destruction and active penetration compared with MEO and clindamycin alone. Remarkably, a significantly greater amount (P < 0.05) of internal substances was released from bacterial cell membranes upon nanoemulsion treatment compared to the pure active compounds.
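Time-kill results like these are commonly summarized as log10 reductions in viable counts, with a drop of at least 3 log10 within the test window taken as the usual bactericidal criterion. A minimal sketch follows; the CFU values are hypothetical, not data from this study.

```python
import math

# Log10 reduction in viable counts between two time points; a floor of
# 1 CFU/mL avoids log10(0) when a sample is sterilized. The CFU values
# used below are hypothetical illustrations.
def log10_reduction(cfu_initial, cfu_final):
    return math.log10(cfu_initial / max(cfu_final, 1))

print(log10_reduction(1e6, 1e3))  # 3.0 -> meets the bactericidal threshold
print(log10_reduction(1e6, 0))    # 6.0 -> complete kill to the detection limit
```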
Employing microfluidic chips showed superior efficiency in terms of lower sample volume, shorter residence time, and higher release of cytoplasmic materials compared to the conventional method. It was found that providing more efficient contact between the two phases results in greater destruction of the bacterial cell membrane. The current study highlights the potential to enhance the antibacterial activity of nanoemulsions containing natural compounds by integrating microfluidic chips as a promising, ingenious, and economical technology. Below is the link to the electronic supplementary material. Supplementary Material 1
A commemoration of the “digital” side of Juan Rosai: a junior’s perspective of the legacy of an all-round pathologist | 1e78bdf5-d09e-4bee-9a13-21d278340b1f | 8720403 | Pathology[mh] | When Bethany and Filippo asked me to write down a few lines in commemoration of Juan Rosai, underlining his role in the field of digital pathology, the first thought that arose in my mind was ‘Who is or who was Juan Rosai’? As a first-year pathology resident, Rosai’s name was especially familiar because of the surgical pathology book bearing his name that I have seen in regular use in different institutions. But apart from the fame of this work, I found myself knowing very little concerning his career and the professional achievements that must have led to his writing of an almost singled-authored masterpiece. To confront my question, I turned to the internet. After doing some digging, the information I found started depicting Rosai as an authentic all-round professional. At a first and superficial sight, he was described as a firm defender of conventional H&E slides and a convinced believer in morphology. He apparently played a role not only as a diagnostician but also as a researcher, a consultant and a teacher, distinguishing himself as a real icon at all levels of modern pathology. Furthermore, among his multifaceted contributions to the discipline, he was also overtly acclaimed as a master of surgical pathology, to the point of receiving several flattering nicknames, such as “the Maradona of surgical pathology”. However, going on in my personal quest for Rosai’s true identity, I realized that he was not only focused on H&E and microscopy, but he was also a polyhedric pathologist: he was an innovative promoter of emerging technologies, such as immunohistochemistry, molecular biology and digital pathology. 
He foresaw their importance and future applications before they proved to be mainstays in the field of modern pathology, and correctly anticipated the main benefits of their wider usage in both his consultation work and in the general practice of surgical pathology. All the more surprisingly, he did all this while remaining a staunch supporter of the continuous role of morphology in standard diagnostic practice . He probably was the very first one to understand that all these innovations, leading to evolving subspecialties of their own and including digital pathology itself, are per se pathology. During his long career as a consultant, Rosai had the chance to witness the steady technical advances that improved the quality and accessibility of digital pathology, starting from the first examples of ‘static’ digital images to the more up to date ‘dynamic’ whole slide imaging . He understood the value of digital pathology and tried to communicate its fundamental pros to the wider scientific community. He was among the first to promote digital pathology as the key to facilitate second opinion consultations, thanks to an easier and faster sharing of digital images, rather than physical glass slides, among geographically distant pathologists . He understood the benefit of being able to carry out reproducible measurements directly on the digital slide, such as tumor width or depth of invasion, and to objectively quantify positive cells during immunohistochemical examinations. He also praised digital pathology for a number of other secondary features, such as the opportunity to manipulate the digital slide and add annotations, and the chance to examine the material at magnifications not easily attainable with traditional microscopy. I was truly fascinated by reading his wholehearted support of digital pathology in an email he sent to the FDA, which clearly showed how strong Rosai’s advocacy of this new discipline was . 
Lastly, he understood the potential of being able to archive countless digital slides within servers rather than in conventional storage rooms, a feature that proved to be the basis for the creation of his own ‘Rosai Digital Collection’ ( https://www.rosaicollection.org/index.cfm ). For someone as young as me, the sole existence of this collection appears exciting and incredibly stimulating. It helps you realize how vast pathology is as a discipline and grants you the chance to have a look at slides that you would hardly ever see in routine work and, probably, in one’s entire diagnostic career. All the more excitingly, thanks to the digital nature of such a collection, all the material Rosai collected and commented is made freely available to all, from the young trainees to the more seasoned diagnosticians, regardless of geographical location. Without any doubt, all this underlines Rosai’s forerunning openness to the promising educational role digital pathology has to offer. From 2000 to 2005, Rosai moved back to Italy to serve as Chairman of the Pathology Department of the National Cancer Center in Milan. From 2005 onwards, he created and became the director of the International Center for Pathology Consultations of the Italian Diagnostic Center in Milan, with the core aim of providing surgical pathology consultations through digital telepathology for pathologists, clinicians and patients both in Italy and overseas. Going back to the starting question of this journey, Rosai is the surgical pathologist in the truest meaning of the word, embracing all the technologies, from H&E to the digital, to render a diagnosis valuable for both the clinician and the patient. What I imagine now is Juan Rosai rendering diagnoses using digital slides with all their associated benefits, including the AI. 
Although it is difficult to imagine an AI minimally close to Rosai’s talent as a diagnostician, in all likelihood Rosai himself would have encouraged us to pursue further research in the field to create better-performing AI tools and to promote their wider usage by all. So, given Rosai’s strong support of digital pathology, what are we waiting for to embrace and follow his ideas in this field? The following statement by Rosai should erase any lingering doubts and encourage us to move on to a fully digital approach . “I would simply conclude by saying that from a technical and scientific standpoint I am thoroughly convinced that a diagnosis made on the basis of a well-prepared digital image of a representative whole section is just as informative and accurate as that performed by using the time-honored examination of a glass slide under the binocular microscope.” “Such an important matter […] I have no doubts will revolutionize the field of pathology, if it is not doing that already.”
Integrating the milk microbiome signatures in mastitis: milk-omics and functional implications | 126091aa-c719-490c-b96b-e64e034523c2 | 11742929 | Biochemistry[mh] | Mastitis is a frequently occurring polyetiological disease in dairy animals and lactating women. It is generally defined as a localized intramammary infection that often results in inflammation of the mammary gland (Ali et al. ; Hu et al. ; Ruegg ; Chen et al. ; Zhu et al. ). Globally, mastitis is recognized in the dairy industry as the most devastating and major disease, causing significant economic losses and adversely impacting animal health and productivity as well as milk quality, safety, and yield (Silva et al. ; Hoque et al. ; Wang et al. ; Ruegg ; Alessandri et al. ; Panchal et al. ; Kizil et al. ). Depending on the severity of clinical manifestations and symptoms, mastitis presents either as a clinical or subclinical infection. While both subclinical mastitis (SCM) and clinical mastitis (CM) are characterized by high milk somatic cell counts (Song et al. ; Abed et al. ), the latter, however, presents with varying conspicuous physiological and anatomical alterations, including irritation, inflammation, redness, and swelling of the mammary gland, resulting in changes in milk consistency, color, and yield. SCM usually presents as an inconspicuous infection with limited visible clinical symptoms yet causes significant impacts on overall lactation performance, host health, and immune functioning (Kaczorowski et al. ; Wang et al. ; Chen et al. ). Due to the limited clinical signs, prolonged latency, and insufficient attention for prompt interventions, the incidence of SCM is substantially higher than that of CM, generally accounting for the majority (approximately 90%) of bovine mastitis cases (Song et al. ; Wang et al. ).
In humans, mastitis is estimated to occur in approximately 10 to 33% of all lactating women, though the reported incidence varies across geographical locations and populations (Pevzner and Dahan ; Wilson et al. ). While SCM in breastfeeding women is usually self-limiting and can resolve through self-management (e.g. breast massaging, cold compresses, etc.), CM requires treatment with antibiotics and cessation of breastfeeding in severe cases (Boix-Amorós et al. ; Wilson et al. ; Ouedraogo et al. ; Kizil et al. ). Mammalian milk is often referred to as the “elixir of life” or “maternal white blood” readily available for consumption by offspring from birth. Being the sole nutrition for offspring from birth, milk is rich in its nutritional composition, essentially consisting of biologically active substances such as growth factors, oligosaccharides, and immunoglobulins as well as essential nutrients, including minerals, proteins, amino acids, vitamins, fats, water, etc. (Williams et al. ; Bruckmaier and Zinn ; Guo et al. ). Historically, milk was generally considered to be sterile unless contaminated either by external sources or due to [systemic] maternal infection (Fernández et al. ; Couvillion et al. ). From a traditional viewpoint, the presence of microorganisms in milk often indicates milk spoilage, mastitis, or a potential threat of the transmission of zoonotic or environmental pathogens to humans (Jones ; Holsinger et al. ; Fernández et al. ). Within the last decades however, numerous studies demonstrated the presence of microorganisms, especially lactic acid bacteria in milk from healthy hosts (Heikkilä and Saris ; Martín et al. ; Jiménez et al. ; Zimmermann et al. ; Togo et al. ; Reuben et al. ; Zheng et al. ; Dehghani Champiri et al. ; Navarré et al. ; Akinyemi et al. ). 
The increasing use of culture-independent high-throughput sequencing techniques has revealed the existence of diverse microbial communities across a wide range of niches, including body sites and fluids (e.g., milk) which were previously believed to be sterile when healthy (Zeineldin et al. ; Borghi et al. ; Oikonomou et al. ). Beyond the nutritional benefits derived from milk consumption, recent research continues to demonstrate the ubiquitous presence of highly diverse and previously unknown bacterial groups in milk (Murphy et al. ; Moossavi et al. ; Toquet et al. ; Notarbartolo et al. ; Wang et al. ; Singh et al. ; Alessandri et al. ). In comparison with other host and disease-associated microbiomes such as gut, skin, vagina, and respiratory tract, the milk microbiome and its relationship with mastitis are seldom studied together. The increasing understanding of niche-specific host-associated microbiomes and their impact on health has propelled interest in studying the milk microbial community as well as its impact on the health of both adults and offspring. Therefore, unraveling the relationship between milk microbiome and host health can present interesting and novel frontiers for improving infant and maternal health. Through high throughput technologies, it is now possible to profile milk microbial communities and also elucidate their complex metabolic activity, functional potential, and host-microbe interactions. This could undoubtedly provide useful insights and increase current understanding of the relationship between any deviation in host health (e.g., in the case of mastitis) and the diversity and composition of milk microbial community and mediated metabolites. The milk microbiota and metabolites have been recently demonstrated to be the major determinant of milk quality, udder health status, and incidence of mastitis (Wang et al. ; Porcellato et al. ; Winther et al. ; Tarrah et al. ; Neculai-Valeanu and Ariton ; Alessandri et al. ; Jin et al. ). 
Furthermore, recent studies have shown significant differences in the diversity and composition of milk microbial community and metabolites between CM, SCM, and healthy hosts (Wang et al. , ). Although the milk microbiome is yet to be extensively studied, there is a growing interest in understanding the extent of its dysbiosis in relation to the initiation and progression of mastitis in animals and humans. This review therefore provides significant insight into the possible drivers and sources of the milk microbiome and their potential roles in mastitis and milk quality. We also discuss the advantages and challenges of different high-throughput “omics” technologies, including metagenomics, metabolomics, metatranscriptomics, lipidomics, and metaproteomics, separately and in combination (multi-omics), in elucidating the mechanistic relationship between milk microbiome and mastitis. The information provided will inform future experimental microbiome research and enhance the integration of functional and mechanistic microbiome potential in health and disease. The current ability to characterize the microbial community of milk and to unravel its origins has significantly expanded within the last two decades. This is largely due to the advancement of high-throughput omics technologies in profiling microbial communities. Milk microbiota has been hypothesized to originate from both exogenous and endogenous sources, including mammary glands and entero-mammary microbial translocation (Addis et al. ; Doyle et al. ; Moossavi et al. ; Williams et al. ; Taponen et al. ; Power et al. ; Dombrowska-Pali et al. ). It has been traditionally believed that the milk microbiota originates exogenously from the surrounding environment, especially the skin of the mammary gland, the teat canal, or the oral cavity of the offspring. However, recent advances in milk microbiota profiling reveal taxonomic groups that could not have originated from exogenous sources (Gueimonde et al. ; Jost et al.
; Pannaraj et al. ; Moossavi and Azad ; Filatava et al. ; Power et al. ; Guo et al. ; Dombrowska-Pali et al. ). Therefore, the external or surrounding environment cannot be solely considered as the source of milk microbiota. The evidence of the endogenous origin of milk microbiota has been supported by different studies involving humans, mice, and ruminants (Perez et al. ; Jiménez et al. ; Young et al. ; de Andrés et al. ; Ma et al. ; Hoque et al. ; Xu et al. ). The endogenous origin of milk microbiota through entero-mammary translocation of microorganisms has garnered considerable attention over the years. This is because microbial communities across ecological niches within the host do not function independently as separate environments. They closely interconnect and interact with each other, forming a network of complex inter-related microbial communities. Consequently, microorganisms from the gut and other body sites may enter the mammary gland and eventually the milk through endogenous routes (Costello et al. ; Ruegg ; Guo et al. ). The existence of entero-mammary pathways and the transfer of microorganisms from the gastrointestinal tract to the mammary glands have been described by several authors (Costello et al. ; Donnet-Hughes et al. ; Fernández et al. ; Jost et al. ; Stinson et al. ). Though not clearly elucidated, the mechanisms regulating microbial translocation across the intestinal barrier to the mammary glands or milk are widely believed to selectively involve immune cells, especially macrophages and intestinal dendritic cells (Martín et al. ; Perez et al. ; Rodríguez et al. ; Selvamani et al. ; Guo et al. ; Dombrowska-Pali et al. ). The intestinal dendritic cells selectively sample gut contents by loosening the tight junctions between intestinal absorptive cells and extending their dendrites to the lumen without compromising the barrier integrity of the intestinal epithelia (Rescigno et al. ; Donnet-Hughes et al. ; Rodríguez et al. ). 
Because of the sampling activity, the dendritic cells can selectively harbor and transfer live bacteria to the mesenteric lymph nodes which consequentially spread to the lactating mammary gland and other distant mucosal surfaces through the lymphoid system of the mucosa (Donnet-Hughes et al. ; Ferretti et al. ; Ruegg ). More so, active lactation causes the migration of cells from the intestinal lymphoid tissues to the mammary gland through peripheral and lymphatic blood circulations (Ruegg ). The presence of bacteria and their genetic material have been previously reported in human peripheral blood mononuclear cells and breast milk cells during lactation (Rodríguez ; Ferretti et al. ; Rodríguez et al. ). For instance, strains of lactic acid bacteria orally administered to mice and rats during pregnancy and lactation were equally detected in milk and mammary tissues of the treatment group but not in the untreated group (control) (de Andrés et al. ; Azagra-Boronat et al. , ; Selvamani et al. ). Similarly, Lactobacillus gasseri CECT5714, Ligilactobacillus salivarius CECT5713, and Limosilactobacillus fermentum CECT5716 were detected in breast milk of lactating mothers following oral consumption of specific probiotics containing the same microbial strains (Derakhshani et al. ; Stinson et al. ). Although further research is required to fully unravel the underlying mechanisms of entero-mammary translocation of microorganisms, however, these findings suggest strong evidence of the transfer of microorganisms from the gut to the mammary glands and eventually milk. In support of the endogenous origin of milk microbiota, additional studies (Young et al. ; Metzger et al. ; Jiang et al. ) have detected rumen microbiota and genetic material in bovine milk, hence suggesting the rumen-mammary pathway in ruminants. Similarly, bovine milk shares a physiological resemblance with rumen contents in terms of physicochemical composition (e.g. 
temperature and pH) and rich nutritional composition that support microbial growth (Priyashantha et al. ; Souza et al. ; Guo et al. ). These similarities plausibly support the interconnectedness and crosstalk between the microbiota of rumen and bovine milk. The detection of certain rumen anaerobic microorganisms, including Bifidobacterium spp, Ruminococcus spp, and members of the Peptostreptococcaceae family, in bovine milk of healthy lactating cows (Young et al. ; Jiang et al. ) further supports the rumen-mammary hypothesis. The presence of these obligate anaerobic microorganisms in bovine milk suggests that the surrounding environment and exogenous sources cannot be considered the sole origin of milk microbiota (Lima et al. ; Taponen et al. ). The retrograde flow of infant oral microorganisms into the breast and mammary ducts of lactating mothers, as documented in human studies (Murphy et al. ; Ferretti et al. ; Moossavi et al. ; Williams et al. ; Fehr et al. ; Ames et al. ), has been hypothesized as another likely source of maternal milk microbiota. The microbiome composition and structure of infants’ oral cavities are notably similar to those of maternal milk (Biagi et al. ; Avershina et al. ; Williams et al. ; Couvillion et al. ). Supporting this hypothesis, several studies have reported the similarity (especially the dominance of Streptococcus spp) between the maternal milk microbiome and that of the infant oral cavity (Hunt et al. ; Cephas et al. ; Fernández et al. ; Williams et al. ; Nardi et al. ; Arishi et al. ). The infant oral microbiota has been estimated to contribute about 21% and 66% of the maternal milk microbiota at day 2 and at 5 months of age, respectively (Williams et al. ). Undoubtedly also, the maternal milk microbiome plays a vital role in colonizing and shaping the microbiome of the infants’ oral cavity (Williams et al. ; Ruiz et al. ). The retrograde pathway is rarely documented in dairy cows and other animals.
This is partly because most dairy farms often restrict the interactions between the cows and their calves shortly after birth, thereby limiting microbial interactions through the suckling of maternal milk. In addition to the retrograde flow of microorganisms from the infant oral cavity to the mammary ducts, breast skin microbiota may also contribute to maternal milk microbial composition. Milk contains species of Staphylococcus, Corynebacterium , and Propionibacterium , which are notable inhabitants of adult skin including the sebaceous breast skin (Latuga et al. ; Oh et al. ; Jiménez et al. ; Oikonomou et al. ; Nardi et al. ).

Human milk microbiota

The human milk microbiota (HMM) is highly diverse and complex, consisting of over 800 species of bacteria with the majority being obligate aerobic or facultative anaerobic bacteria (Togo et al. ; Lyons et al. ; Notarbartolo et al. ; Ajeeb et al. ; Power et al. ; Dombrowska-Pali et al. ). The presence of these bacteria in the HMM is known to beneficially impact infants’ health and well-being (Lyons et al. ; Kashyap and Choudhari ; Dombrowska-Pali et al. ). Over the years, several studies have characterized the HMM using both culture-dependent and culture-independent approaches, with the latter widely used in recent years. In a systematic analysis comprising 15,489 milk samples from 11,124 women across 38 countries, 820 bacterial species belonging to 178 genera, 92 families, 52 orders, 24 classes, and 13 phyla were identified from human milk (Togo et al. ). While some phyla (e.g.
Fusobacteria, Deferribacterota, Cyanobacteria) have relatively lower abundance, commonly identified genera in milk from healthy women include Staphylococcus , Streptococcus , Corynebacterium , Pseudomonas , Serratia , Propionibacterium , Bradyrhizobium, Sphingomonas, Ralstonia , Cutibacterium , Enterococcus , Lacticaseibacillus , Lactiplantibacillus, Limosilactobacillus, Lactococcus , Lactobacillus , Leuconostoc , Bifidobacterium , and Weissella , along with other taxonomically related Gram-positive bacteria (Togo et al. ; Fernández et al. ; Ajeeb et al. ). Although previous studies involving multiple countries demonstrated varied HMM across geographical locations, consistent and universal members of the HMM were identified as the core genera in all the samples analyzed (Fitzstevens et al. ; Lackey et al. ). Increasing reports have demonstrated that the HMM contains organized consortia and networks of bacteria that are often stable in structure, diversity, and abundance throughout the lactation period (Sam Ma et al. ; Drago et al. ; Fernández et al. ; Holdsworth et al. ). Regardless of maternal body mass index (BMI), health, diet, demographics, and geography, four dominant phyla, Bacteroidetes, Proteobacteria, Firmicutes, and Actinobacteria, are usually identified across human milk samples (Togo et al. ; Fernández et al. ; Notarbartolo et al. ; Banić et al. ; Dinleyici et al. ; Wang et al. ; Ajeeb et al. ). Among these dominant phyla, previous research has documented a nine-genera core constituting the HMM, comprising Staphylococcus , Streptococcus , Corynebacterium , Pseudomonas , Serratia , Propionibacterium , Bradyrhizobium, Sphingomonas , and Ralstonia (Demmelmair et al. ; Moubareck ; Notarbartolo et al. ; Wang et al. ; Dinleyici et al. ; Dombrowska-Pali et al. ). Interestingly, these core bacteria represent about half of the HMM.
However, their relative abundance varies among milk samples, geographical regions, and the experimental techniques and analyses used (Diez-Sampedro et al. ; Moubareck ; Cheema et al. ). In addition, potential mother-to-infant microbial transmission through breastfeeding, as is the case with S. aureus , which can also colonize the infant intestine, has been previously shown (Benito et al. ).

Bovine milk microbiota

The last two decades have witnessed a substantial increase in research exploring the entire bovine milk microbiota (BVM) rather than specific milk-borne pathogens. Several comparative studies have consistently reported differences in the composition and structure of the BVM between healthy and diseased (mastitis) cows (Derakhshani et al. ; Hoque et al. , ; Couvillion et al. ; Khasapane et al. ; Yang et al. ; Power et al. ; Guo et al. ; Salman et al. ). These studies show that bovine milk contains a highly abundant, complex, and diverse microbial community. Similar to the HMM, the BVM harbors core phyla and genera that are considerably conserved and consistently appear in at least 95% of all bovine milk samples regardless of dietary, environmental, and individual variations in cows (Astudillo-García et al. ; Moossavi et al. ; Ryu et al. ; Guo et al. ). While some studies have reported inconsistencies in the composition of the BVM across individuals and geographical locations, others have shown relative stability in core microbial groups as well as their overall metabolic/physiological properties and functionalities (Mizrahi et al. ; Guo et al. ). A recent study indicated the presence of 119 bacterial species from 202 genera, 124 families, 82 orders, 33 classes, and 95 phyla in 166 composite milk samples obtained from 166 individual dairy cattle in South Africa (Khasapane et al. ). Notably, four core phyla, Proteobacteria, Firmicutes, Bacteroidota, and Actinobacteria, were present in over 97% of the total samples evaluated.
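The prevalence-based notion of "core" taxa used above (taxa detected in at least 95% of samples) can be made concrete with a short sketch. The genus table below is hypothetical, for illustration only, and is not data from the cited studies.

```python
# Minimal sketch of a prevalence-based "core taxa" filter, assuming a
# genus-by-sample count table. All data here are hypothetical.

def core_taxa(counts, threshold=0.95):
    """Return taxa present (count > 0) in >= `threshold` of samples.

    counts: dict mapping taxon name -> list of per-sample read counts
            (all lists assumed to be the same length).
    """
    core = {}
    for taxon, per_sample in counts.items():
        prevalence = sum(1 for c in per_sample if c > 0) / len(per_sample)
        if prevalence >= threshold:
            core[taxon] = round(prevalence, 2)
    return core

# Hypothetical genus-level counts for 20 milk samples.
table = {
    "Staphylococcus": [5, 12, 3, 8, 1, 4, 9, 2, 7, 6, 3, 5, 8, 2, 1, 4, 6, 9, 3, 7],
    "Pseudomonas":    [0, 2, 1, 3, 0, 1, 2, 1, 0, 2, 1, 3, 2, 1, 2, 1, 0, 2, 1, 3],
    "RareGenus":      [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0],
}

print(core_taxa(table))  # → {'Staphylococcus': 1.0}
```

In real studies the threshold (90%, 95%, 97%) and the detection cutoff (raw counts vs. relative abundance) differ between reports, which partly explains the inconsistent "core" lists across publications.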
In a previous study comprising 112 milk samples from individual cows from 10 different farms in the Shanghai region of China, 33 phyla and 785 genera were detected (Li et al. ). The core bacterial groups identified included four phyla [Bacteroidetes (7.47%), Actinobacteria (9.40%), Proteobacteria (39.0%), and Firmicutes (40.8%)] and four genera [ Acinetobacter (10.2%), Lactococcus (11.7%), Bacillus (13.8%), and Pseudomonas (19.6%)] in all the samples. Similarly, several recent studies separately revealed the presence of the four core phyla in milk samples from dairy cattle in Ireland, Pakistan, Turkey, Japan, China, Italy, Bangladesh, and Korea (Hoque et al. ; Ryu et al. ; Kizil et al. ; Yang et al. ; AoDaohu et al. ; Yap et al. ; Salman et al. ). The core BVM is generally believed to consist of Bacteroides , Staphylococcus , Lactobacillus , Propionibacterium, Enterococcus , Streptococcus , Lactococcus , Porphyromonas , Corynebacterium , Fusobacterium, and Pseudomonas (Addis et al. ; Hoque et al. ; Oikonomou et al. ; Porcellato et al. ; Power et al. ; Guo et al. ). Interestingly, some of these genera are often associated with healthier udder-quarters in cows (Addis et al. ).

Small ruminants’ milk microbiota

The structure and composition of the milk microbiota of small ruminants are highly variable, probably due to the limited number of studies as well as several environmental and species/breed-specific factors. Milk microbiota of small ruminants such as goat, sheep, reindeer, and water deer show significant differences, suggesting environmental influences, host-associated factors, and host-microbial adaptation as major drivers of microbial composition and structure (Li et al. ; Oikonomou et al. ; Polveiro et al. ; Guo et al. ; Hoving-Bolink et al. ). So far, there has been no consensus on the overall core microbial phyla or genera in the milk of small ruminants across various species or breeds.
Out of 31 and 43 phyla identified from the milk samples of 212 Spanish Churra sheep and 50 Assaf ewes, Actinobacteria, Bacteroidetes, Firmicutes, and Proteobacteria accounted for 97.4% and 90.07%, respectively, of all the samples examined (Esteban-Blanco et al. , ). While Actinobacteria, Firmicutes, and Proteobacteria were reported as the core phyla in the milk microbiota of sheep (Esteban-Blanco et al. , ), the presence of other recurring phyla, especially Bacteroidetes, Acidobacteria, Cyanobacteria, and Fusobacteria, has been documented as well (Castro et al. ; Esteban-Blanco et al. , ). Although over 1000 genera were identified from sheep milk, studies have shown Corynebacterium , Lactobacillus , Staphylococcus , Streptococcus, and Escherichia/Shigella to be the core microbiota of milk from healthy sheep (Castro et al. ; Esteban-Blanco et al. , ; Toquet et al. ). However, host-related factors such as breed as well as geographic location have been suggested to impact milk microbiota composition in sheep (Castro et al. ; Esteban-Blanco et al. ). Goat milk microbiota is reported to primarily contain Firmicutes and Proteobacteria as the core phyla and, to a minor extent, Actinobacteria (Li et al. ; Zhang et al. ; Niyazbekova et al. ; Polveiro et al. ; Lauková et al. ; Hoving-Bolink et al. ). These phyla usually constitute about 90% of the total bacterial phyla in milk from healthy goats. Furthermore, studies have shown phylum-level variations in goat milk microbiota composition during the lactation period (McInnis et al. ; Zhang et al. ; Niyazbekova et al. ). In a recent study, Hoving-Bolink et al. identified Lactococcus , Staphylococcus , Pseudomonas , Acinetobacter , Corynebacterium , and Microbacterium as the core genera in healthy goat milk. Polveiro et al.
reported the presence of Staphylococcus spp, Brevibacterium spp, Enterococcus , and Bacteroides spp in all the milk samples examined, including healthy goats and goats diagnosed with clinical, subclinical, and gangrenous mastitis. Whereas Curtobacterium , Staphylococcus , and Bifidobacterium were reported as the core genera in milk samples from healthy goats across farms in central and eastern Slovakia, the presence of Enterococcus , Lactococcus , Streptococcus , Lacticaseibacillus , and Lactobacillus was also prevalent (Lauková et al. ). Factors including animal breed, sample origin, farm location, and management appear to be the key drivers of goat milk microbial composition and structure.

Mastitis: dysbiosis of milk microbiota

Intramammary infections resulting in mastitis constitute a common disease in mammalian species globally. Mastitis usually causes a significant decrease in milk production, dysbiosis of the milk microbiota, undesired weaning, premature culling, difficulty in conception, and treatment costs in both humans and animals (Wolfenson et al. ; Boix-Amorós et al. ; Fernández et al. ; Wang et al. ; Borş et al. ; Ito et al. ; Crippa et al. ). Mastitis affects approximately 10 to 33% of all lactating women, resulting in severe public health problems for both infants and mothers (Pevzner and Dahan ; Wilson et al. ). In animals, especially cattle, decreases in milk production of up to 15%, reduced overall well-being, and behavioral changes due to mastitis have been well documented (Addis et al. ; Toquet et al. ; Morales-Ubaldo et al. ). Mastitis is often characterized by dysbiosis of the milk (and mammary) microbiota, depending on the clinical manifestations (clinical or subclinical) or the course (acute, granulomatous, and subacute) (Angelopoulou et al. ; Demmelmair et al. ; Fernández et al. ; Dobrut et al. ).
The analysis of mastitis milk from humans and animals provides new insights into the extent of milk microbiota perturbations as well as the ecology of mastitis-associated etiologies. As previously mentioned, milk from healthy hosts contains highly diverse bacteria, the majority of which are regarded as nonpathogenic and are often unrelated to mastitis. The role of these bacterial groups in the initiation, progression, and prevention of mastitis is not fully elucidated. However, emerging evidence shows the potential impact of the milk microbiota on the development of mastitis (Hoque et al. ; Ito et al. ; Yang et al. ; AoDaohu et al. ; Guo et al. ; Salman et al. ). Milk from mastitis-suffering animals and women shows distinct microbiota composition and structure when compared to healthy hosts (Mediano et al. ; Hoque et al. ; Selma-Royo et al. ; Toquet et al. ; Kizil et al. ; Yang et al. ; Salman et al. ). The milk microbiota from acute and subacute mastitis is often distinct in both structure and composition, with an abundant presence of aerotolerant bacteria, especially Staphylococcus , and significantly reduced diversity and depleted beneficial obligate anaerobes, including Faecalibacterium , Eubacterium, and Ruminococcus (Patel et al. ; Derakhshani et al. ; Esteban-Blanco et al. ; Boix-Amorós et al. ). In a study consisting of 1849 milk samples from individual lactating women with mastitis (acute/subacute), 91.56% and 29.74% of the milk samples examined revealed the presence of Staphylococcus epidermidis and Staphylococcus aureus , respectively (Mediano et al. ). Additionally, streptococci (70.20%) and corynebacteria (16.60%) constituted the dominant microbial groups in the milk analyzed. The presence of S. epidermidis was previously reported in 85% of milk from women with mastitis (Delgado et al. ). While Staphylococcus is the core genus associated with acute and subacute mastitis, S. aureus and S. epidermidis are the staphylococci most frequently isolated from mastitis milk (Jiménez et al.
; Boix-Amorós et al. ). Although Pseudomonas, Klebsiella, Serratia, Ralstonia, Aeromonas, and Enterococcus are other enriched and frequently isolated genera from mastitis milk, Clostridium, Ruminococcus, Faecalibacterium, Acinetobacter, and Eubacterium are consistently depleted in milk samples of subacute and acute mastitis (Patel et al. ; Angelopoulou et al. ; Hoque et al. ). The lower microbial diversity characterizing the mastitis milk microbiota consequently favors the presence of opportunistic pathogens such as Escherichia coli , Bacillus subtilis, B. cereus, E. faecalis, S. epidermidis, S. hominis, and K. pneumoniae. In comparison to milk from healthy individuals, mastitis milk contains a significantly higher presence of Staphylococcaceae, Brucellaceae, Burkholderiaceae, Streptococcaceae, and Aeromonadaceae at the family level as well as a higher presence of Staphylococcus, Streptococcus, Ralstonia, Klebsiella, Aeromonas, Leptospira, and Proteus at the genus level (Boix-Amorós et al. ; Ito et al. ; Khasapane et al. ; Singh et al. ; Jin et al. ). Bovine mastitis is characterized by an increased presence of Mycoplasma spp, Streptococcus dysgalactiae , Streptococcus agalactiae , Streptococcus uberis, S. aureus, E. coli, Klebsiella pneumoniae, and Corynebacterium bovis in cow milk (Falentin et al. ; Belay et al. ; Girma and Tamir ; Morales-Ubaldo et al. ). Belay et al. identified S. aureus (42.6%), Streptococcus spp. (26.2%), non- aureus staphylococci (14.8%), E. coli (11.5%), Salmonella spp (3.3%), and K. pneumoniae (1.6%) as the predominant bacterial species in 422 milk samples of lactating cows diagnosed with mastitis. While the most abundant bacterial classes in mastitis cow milk were reported to include Bacilli, Clostridia, Alphaproteobacteria, Actinobacteria, and Gammaproteobacteria, the dominant bacterial species include Pseudomonas koreensis, P. azotoformans , P. fragi, Acinetobacter guillouiae , and Mycobacterium bovis (Khasapane et al. ).
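The reduced microbial diversity repeatedly reported for mastitis milk is typically quantified with alpha-diversity metrics such as the Shannon index. A minimal sketch, using hypothetical read counts rather than data from the cited studies:

```python
# Minimal Shannon diversity sketch; the count vectors are hypothetical
# illustrations of an even community vs. one dominated by a single genus
# (as described for Staphylococcus in mastitis milk).
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with count > 0."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log(p)
    return h

healthy  = [25, 25, 25, 25]   # four equally abundant genera
mastitis = [94, 2, 2, 2]      # one dominant genus

print(round(shannon_index(healthy), 3))   # → 1.386 (= ln 4)
print(round(shannon_index(mastitis), 3))  # → 0.293
```

The drop from 1.386 to 0.293 mirrors, in miniature, the diversity loss seen when a single aerotolerant genus expands at the expense of obligate anaerobes.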
In addition to the core microbial groups in bovine milk as previously mentioned, bovine mastitis results in the additional presence of enriched bacterial species including Staphylococcus hominis , Lactobacillus acidipiscis, and four unknown species of the genera Tetragenococcus, Mogibacterium , Jeotgalicoccus, Hymenobacter , Lachnospiraceae, and Anaerococcus (Alessandri et al. ; Burakova et al. ). The decrease in the abundance of beneficial bacterial groups such as Atopostipes, Massilia , Acetitomaculum , and Ralstonia in the milk from cows with mastitis demonstrates their role in the maintenance of eubiosis and a healthy milk microbiota (Burakova et al. ). Interestingly, the milk microbiota of goats with mastitis is highly diverse and appears to be dominated by Fusobacterium, Bacteroides, and Proteobacteria when compared to milk from healthy goats (Polveiro et al. ; Toquet et al. ). Analysis of milk from ewes with mastitis revealed the same genera constituting the core microbiota (in healthy ewes) as mentioned above as well as Clostridium spp, Turicibacter spp, Romboutsia spp, Jeotgalicoccus spp, Pseudomonas spp, and Alloicoccus spp (Esteban-Blanco et al. ). In the same vein, milk from ewes previously known to suffer from mastitis had diverse bacterial species consisting of Sphingobacterium spiritivorum, Staphylococcus warneri, S. schleiferi , S. equorum , S. haemolyticus, S. felis, Pseudomonas aeruginosa, Enterococcus hirae, Clavibacter michiganensis, Bacillus pumilus , Mannheimia haemolytica, and Corynebacterium spp (Gelasakis et al. ; Castro et al. ). Recently, Couvillion et al. suggested the existence of a causal relationship between host phenotypes with mastitis and the milk microbiota.

Characterization of milk microbiome

The experimental methodologies for profiling the milk microbiota continue to evolve.
The advent of high-throughput technologies for microbiota characterization, in addition to the classic microbiological methods, shows that the methods/techniques used in microbiota profiling are pivotal in uncovering the observed taxa or bacterial groups in milk (Lopez Leyva et al. ; Notarbartolo et al. ; Selma-Royo et al. ; Cheema et al. ). So far, the experimental methods for the analysis of milk microbiota rely on both traditional culture-dependent techniques and culture-independent methods (Fig. ), which primarily depend on nucleic acid sequence-based approaches, including amplicon sequencing, shotgun metagenomics, and metatranscriptomics (Table ).

Culture-dependent approaches: milk culturomics

The initial microbiological studies on milk relied on traditional culture-dependent techniques to characterize milk microorganisms. The culture-based techniques assess the morphological, phenotypic, and biochemical characteristics of the isolated strains, which are sometimes genotypically identified. Culture-based conditions are often biased toward identifying pathogens as well as viable and dominant bacteria. However, fastidious, non-culturable, and less abundant bacteria are usually not detected (Ruiz et al. ; Lopez Leyva et al. ; Cheema et al. ). Though powerful in profiling the viability of specific milk-borne bacteria, culture-based techniques only reveal the limited taxa capable of withstanding sampling procedures, transportation, storage, and experimental/laboratory conditions. Consequently, these techniques can selectively reduce the depth of the overall microbial community, detecting only a fraction of the bacterial taxa in milk (Browne et al. ; Sakwinska and Bosco ; Cheema et al. ). For instance, of the 554 bacterial species identified in human milk, only 210 species (38%) were detected by culture-based techniques (Togo et al. ).
Through these techniques, the presence of dominant facultative anaerobes and pathogenic bacteria associated with mammary infections, including Streptococcus, Staphylococcus, Propionibacterium , and Corynebacterium , has been demonstrated (Martín et al. ; Ruiz et al. ). Additionally, bifidobacteria and several lactic acid bacteria, especially Enterococcus , Weissella , Lactobacillus , Lactococcus, and Leuconostoc , have been successfully detected in milk using nutrient-specific culture media and regulated incubation conditions (Abrahamsson et al. ; Albesharat et al. ; Martín et al. ; Murphy et al. ; Breitenwieser et al. ; Selma-Royo et al. ; Damaceno et al. ; Wang et al. ). Despite the limitations of the culture-based techniques, culturing facilitates the exploitation and preservation of bacterial strains for potential applications in biotechnological, health, and agrifood systems. Apart from the frequent milk microbiological studies involving the distribution of [pathogenic] bacteria as well as their antimicrobial resistance and virulence determinants, potentially beneficial traits such as probiotic properties, bacteriocin production, and other biotechnologically important potentials are extensively sought from milk-borne bacteria (Zhang et al. ; Kim et al. ; Damaceno et al. ; Asha et al. ; da Cunha et al. ; Elnar and Kim ). Recent decades have witnessed the advent and development of a culture-based microbiota approach known as culturomics. Culturomics is a highly effective culture-dependent technique that uses high-throughput and specific microbial culture conditions for large-scale isolation and rapid identification of bacteria in a community (Lagier et al. , ; Ruiz et al. ; Cheema et al. ). The culturomics approach facilitates the collection of a comprehensive repertoire of the microbiota and also the detection of species with low abundance, which are often undetectable by culture-independent methods, including metagenomics (Seck et al. ; Wang et al. ).
While culturomics may not be effective or sufficient for quantifying species abundance, it is the most suitable approach to obtain a comprehensive and viable repertoire of the microbiota (Dickson ; Togo et al. ). The use of culturomics techniques has successfully led to the isolation and identification of a large repertoire of previously undetected and unculturable bacteria from the gut (Lagier et al. , ; Cheema et al. ). Previous studies have optimized a variety of rapid, economical, and effective culturomics techniques for the isolation of different types of bacteria from the gut microbiota of both humans and animals (Lagier et al. , ; Chang et al. ; Hou et al. , ; Wang et al. ; Wan et al. ; Huang et al. ). Unlike for the gut microbiota, there is as yet no widely recognized robust culturomics technique designed specifically for milk microbiota. Recently, Wang et al. successfully characterized the breast milk microbiota using a viable and effective culturomics strategy. Their study provided a solid foundation for the future application of the culturomics approach in milk microbiota research. Using four different culture media, conditions, and MALDI-TOF MS analysis, they identified 6601 colonies and obtained 865 bacterial strains, representing 54 species, 21 genera, and 4 phyla. Furthermore, they reportedly cultivated over 94.4% of the total bacterial species present in the milk samples with high diversity and a 57.0% reduction in workload (Wang et al. ). Previously also, Togo and colleagues (Togo et al. , ) developed culturomics techniques for characterizing human milk microbiota from healthy breastfeeding African women. These separate studies isolated novel bacterial species, including Anaerolactibacter massiliensis , Acidipropionibacterium timonense, Lactomassilus timonensis , Lactimicrobium massiliense , and Galactobacillus timonensis using a culture-based culturomics approach. These few instances where culturomics was applied to milk microbiota (Togo et al.
, ; Wang et al. ) improved our understanding of the robustness of culture-dependent techniques in characterizing the viable microbial community and in establishing and preserving a comprehensive repertoire of bacteria in milk. As current knowledge of the microbiota continues to increase, traditional culture-dependent methods may evolve in parallel to provide cost-effective strategies for identifying novel microorganisms and understanding the microbiota.

Culture-independent omics approaches

The development of culture-independent high-throughput sequencing (‘omics’) technologies has revealed the ecological intricacies of microbial communities across a wide range of environments, including milk. These technologies allow biologists to study the true diversity of the bacterial world, thus revolutionizing the fields of microbiology and microbial ecology. Since culture-dependent techniques only reveal culturable bacteria, which represent a fraction of the bacterial communities in a niche, culture-independent approaches use DNA, RNA, proteins, and metabolites to characterize the microbiota (Ruiz et al. ; Chakraborty et al. ; Naqvi et al. ). Despite experimental biases and other limitations, culture-independent techniques can detect previously unknown or yet-to-be-cultured bacterial groups with high sample throughput regardless of microbial viability. Additionally, omics technologies have been used to identify the relationships between microbial composition or structure and function (Couvillion et al. ). A wide range of culture-independent omics approaches for milk microbiota profiling are available and have been increasingly used in the last two decades.

Sequencing-based omics techniques

16S and shotgun metagenomics

The 16S rRNA gene amplicon sequencing is the most commonly used culture-independent approach for milk microbiota profiling.
While this approach provides comprehensive compositional, structural, and taxonomic information, shotgun metagenomics provides deeper insight into fine-resolution taxonomic diversity (at species, subspecies, and strain levels) and functional features in microbial communities by analyzing gene sequences that encode for functional RNAs or proteins (Moossavi et al. ; Couvillion et al. ; Sun et al. ). Over the years, several studies have characterized the milk microbiota from both healthy and diseased (mastitis) hosts (e.g. humans, cows, goats, etc.) using metagenomics (Kordy et al. ; Olshan et al. ; Polveiro et al. ; Dahlberg et al. ; Khasapane et al. ; Alessandri et al. ; Burakova et al. ; Ajeeb et al. ). Recent studies used metagenomic sequencing to characterize and compare the milk microbiota from cows with mastitis and healthy controls with some studies annotating the metagenomic sequences to identify and relate the microbiota with functional genes and metabolic pathways (Hoque et al. ; Tarrah et al. ; Alessandri et al. ; Khasapane et al. ; Sahoo et al. ; Zhang et al. , ; Ran et al. ; Sabarish and Dhanasekaran ; Salman et al. ). Similarly, metagenomic analysis has been used in different studies to characterize the microbiota of human breast milk from women with and without mastitis (Jiménez et al. ; Boix-Amorós et al. ; Hoque et al. ; Asbury et al. ; Castro et al. ; Ito et al. ; Ong et al. ; Sindi et al. ; Chen et al. ; Filatava et al. ; Treven et al. ; Ran et al. ; Endika et al. ). In addition to microbial composition, structure, and diversity identified by metagenomics, other genomic determinants, including immunologic profiles, metabolic, virulence, and antibiotic resistance determinants have been associated with microbiota perturbations and mastitis in women (Castro et al. ; Ito et al. ; Ong et al. ; Sindi et al. ; Treven et al. ; Ran et al. ; Endika et al. ). 
The characterization of the microbiota using sequence-based approaches involves either profiling the whole set of microbial genomes within the community or targeted sequencing of the 16S rRNA gene. The latter combines the amplification and sequencing of a fragment of the 16S rRNA gene to characterize the microbiota. As the most conserved and widely targeted gene in bacteria, the 16S rRNA gene carries hypervariable regions (V1–V9) flanked by conserved stretches to which primer pairs bind, allowing amplification of fragments that capture taxonomic information (Addis et al. ; Sarangi et al. ; Parente et al. ; Lopez Leyva et al. ). While 16S rRNA gene sequencing remains the most common approach for characterizing the milk microbiota, some limitations have been recognized, including primer specificity, low bacterial load in milk, varied experimental platforms and procedures, variability in diversity estimates, and loss of diversity due to amplification biases (Jumpstart Consortium Human Microbiome Project Data Generation Working Group ; Logares et al. ; Fitzstevens et al. ; Sarangi et al. ; Lopez Leyva et al. ). These contribute to the inconsistent reports on prevalent or core genera or species associated with milk from healthy or mastitis-suffering hosts. Alternatively, shotgun metagenomics sequences DNA directly from the available genomic material in a sample and then assembles contiguous sequences or entire genomes in order to assign high-resolution functional and taxonomic information (Sarangi et al. ; Almeida et al. ; Peterson et al. ; Usyk et al. ). Unlike 16S rRNA gene sequencing, this approach does not target or amplify a specific gene. Therefore, it overcomes the bias associated with gene amplification and is generally regarded as the gold standard for microbiome characterization (Lopez Leyva et al. ).
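The primer-binding step behind 16S amplicon sequencing can be illustrated with a toy in-silico match of a degenerate primer against a template. The primer string below is the commonly used 341F (V3–V4) sequence, but the template is a short synthetic string, not a real 16S gene, so this is an illustration of degenerate matching only.

```python
# Toy in-silico primer matching using IUPAC degenerate base codes.
# Template is synthetic; primer 341F (CCTACGGGNGGCWGCAG) is assumed
# from common 16S V3-V4 protocols.

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def find_primer(template, primer):
    """Return the 0-based position of the first degenerate match, or -1."""
    n, m = len(template), len(primer)
    for i in range(n - m + 1):
        if all(template[i + j] in IUPAC[p] for j, p in enumerate(primer)):
            return i
    return -1

# Synthetic template with a matching primer site embedded after 6 bases.
template = "TTGACG" + "CCTACGGGAGGCAGCAG" + "ACGTACGT"
print(find_primer(template, "CCTACGGGNGGCWGCAG"))  # → 6
```

Real primer-design and in-silico PCR tools additionally check the reverse primer on the opposite strand, allow mismatches, and score amplicon length, but the degenerate-base lookup is the same idea.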
Large amounts of reads that are complex and difficult to assemble de novo are often generated; these are usually mapped and annotated for quantitative analysis using different bioinformatics pipelines and databases for high-resolution taxonomic and functional characterization of metagenomes in microbial communities. Metagenomic sequencing approaches are DNA-based techniques that describe the presence of microorganisms and genes within the community but are incapable of characterizing the transcriptional profiles of the entire microbial community or of individual microorganisms in the community (Couvillion et al. ; Cheema et al. ).

Metatranscriptomics

The sequence-based metagenomics approaches discussed above describe the structure and composition of microbes and genes within a community but not the functional activity of individual organisms or the whole community. Metatranscriptomics characterizes the transcriptional profiles of microbial communities and therefore provides insight into the active functional profile of the microbiome (Aguiar-Pulido et al. ; An et al. ; Arıkan and Muth ). By capturing the total RNA within the community (the metatranscriptome), including mRNA, under specific conditions, metatranscriptomic analysis provides information on gene expression within the community at a given time. The pool of RNA transcripts expressed in a community at a given time is thereby analyzed, simultaneously allowing the characterization of both microbial abundance (rRNA) and gene expression (mRNA) (Tveit et al. ; Addis et al. ; Zhang et al. , ). Metatranscriptomics was initially applied using hybridization or qPCR-based techniques (Higuchi et al. ; Simon and Daniel ).
However, with the advancement in sequencing technologies, RNA-Seq has been established as the gold standard mainly due to the lack of reference isolates and the high diversity of microbial communities (Zhang et al. , ; Arıkan and Muth ). Metatranscriptomics has been used in both humans and animals to characterize the temporal gene expression and functional analysis in mammary cells and milk during the lactation cycle (Martin Carli et al. ; Twigger et al. ; Wu et al. ; Xuan et al. ; Smilowitz et al. ; LeMaster et al. ; Xia et al. ; Doerfler et al. ; Zorc et al. ; Pozovnikova et al. ). Milk metatranscriptomics studies have primarily centered on examining host RNA in milk for information regarding host cell function and health (e.g. somatic cells) rather than specific or overall community microbial functions (Couvillion et al. ). In a recent study integrating milk metagenomics and metatranscriptomics, Zhang et al. demonstrated the association between elevated somatic cell counts and high relative abundance of Sphingomonas and Ralstonia in cows with subclinical mastitis. The expression of bovine uridine phosphorylase 1 and transcobalamin 1 positively correlated with the relative abundance of Sphingomonas and Ralstonia . Their study further revealed distinct functional alternations in some microbial processes. Another obstacle to limited microbial metatranscriptomics study is the difficulty in differentiating between microbial and host RNA in milk. As earlier stated, microbial biomass in milk is low and could easily be dominated by the more abundant host RNA (Couvillion et al. ). In addition to providing active functional information, total RNA metatranscriptomics could also provide compositional and taxonomic insights into microbial communities (Xue et al. ; Hempel et al. ; Thøgersen et al. ). Integrating metagenomic data can facilitate metatranscriptomics analyses and assembly (Wu et al. ; Hempel et al. ; Zhang et al. , ). 
The human milk microbiota (HMM) is highly diverse and complex, consisting of over 800 species of bacteria, with the majority being obligate aerobic or facultative anaerobic bacteria (Togo et al. ; Lyons et al. ; Notarbartolo et al. ; Ajeeb et al. ; Power et al. ; Dombrowska-Pali et al. ). The presence of these anaerobic bacteria in HMM is known to beneficially impact infants’ health and well-being (Lyons et al. ; Kashyap and Choudhari ; Dombrowska-Pali et al. ). Over the years, several studies have characterized the HMM using both culture-dependent and culture-independent approaches, with the latter widely used in recent years. In a systematic analysis comprising 15,489 milk samples from 11,124 women across 38 countries, 820 bacterial species belonging to 178 genera, 92 families, 52 orders, 24 classes, and 13 phyla were identified from human milk (Togo et al. ). While some phyla (e.g. Fusobacteria, Deferribacterota, Cyanobacteria) have relatively lower abundance, commonly identified genera in milk from healthy women include Staphylococcus, Streptococcus, Corynebacterium, Pseudomonas, Serratia, Propionibacterium, Bradyrhizobium, Sphingomonas, Ralstonia, Cutibacterium, Enterococcus, Lacticaseibacillus, Lactiplantibacillus, Limosilactobacillus, Lactococcus, Lactobacillus, Leuconostoc, Bifidobacterium, and Weissella, as well as other taxonomically related Gram-positive bacteria (Togo et al. ; Fernández et al. ; Ajeeb et al. ).
Although previous studies involving multiple countries demonstrated that the HMM varies across geographical locations, consistent and universal members of the HMM were identified as core genera in all the samples analyzed (Fitzstevens et al. ; Lackey et al. ). Increasing reports have demonstrated the HMM to contain organized consortia and networks of bacteria that are often stable in structure, diversity, and abundance throughout the lactation period (Sam Ma et al. ; Drago et al. ; Fernández et al. ; Holdsworth et al. ). Regardless of maternal body mass index (BMI), health, diet, demographics, and geography, four dominant phyla, namely Bacteroidetes, Proteobacteria, Firmicutes, and Actinobacteria, are usually identified across human milk samples (Togo et al. ; Fernández et al. ; Notarbartolo et al. ; Banić et al. ; Dinleyici et al. ; Wang et al. ; Ajeeb et al. ). Among these dominant phyla, previous research has documented a nine-genera bacterial core of the HMM, comprising Staphylococcus, Streptococcus, Corynebacterium, Pseudomonas, Serratia, Propionibacterium, Bradyrhizobium, Sphingomonas, and Ralstonia (Demmelmair et al. ; Moubareck ; Notarbartolo et al. ; Wang et al. ; Dinleyici et al. ; Dombrowska-Pali et al. ). Interestingly, these core bacteria represent about half of the HMM, although their relative abundance varies with milk samples, geography, and experimental techniques and analyses (Diez-Sampedro et al. ; Moubareck ; Cheema et al. ). In addition, potential mother-to-infant microbial transmission through breastfeeding, as is the case with S. aureus, which can also colonize the infant intestine, has been previously shown (Benito et al. ). The last two decades have witnessed a substantial increase in research exploring the entire bovine milk microbiota (BVM) rather than specific milk-borne pathogens.
Several comparative studies have consistently reported differences in the composition and structure of the BVM between healthy and diseased (mastitis) cows (Derakhshani et al. ; Hoque et al. , ; Couvillion et al. ; Khasapane et al. ; Yang et al. ; Power et al. ; Guo et al. ; Salman et al. ). Together, these studies show that bovine milk harbors a highly abundant, complex, and diverse microbial community. Similar to the HMM, the BVM harbors core phyla and genera that are considerably conserved and consistently appear in at least 95% of all bovine milk samples, regardless of dietary, environmental, and individual variations in cows (Astudillo-García et al. ; Moossavi et al. ; Ryu et al. ; Guo et al. ). While some studies have reported inconsistencies in the composition of the BVM across individuals and geographical locations, others have shown relative stability in core microbial groups as well as in their overall metabolic/physiological properties and functionalities (Mizrahi et al. ; Guo et al. ). A recent study indicated the presence of 119 bacterial species from 202 genera, 124 families, 82 orders, 33 classes, and 95 phyla in 166 composite milk samples obtained from 166 individual dairy cattle in South Africa (Khasapane et al. ). Notably, four core phyla, including Proteobacteria, Firmicutes, Bacteroidota, and Actinobacteria, were present in over 97% of the total samples evaluated. In a previous study comprising 112 milk samples from individual cows from 10 different farms in the Shanghai region of China, 33 phyla and 785 genera were detected (Li et al. ). The core bacterial groups identified included four phyla [Bacteroidetes (7.47%), Actinobacteria (9.40%), Proteobacteria (39.0%), and Firmicutes (40.8%)] and four genera [Acinetobacter (10.2%), Lactococcus (11.7%), Bacillus (13.8%), and Pseudomonas (19.6%)] in all the samples.
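The "core" criterion used in these studies, a taxon detected in a fixed fraction (e.g. at least 95%) of samples, is straightforward to compute from a count table. A minimal sketch with invented counts (the sample names, taxa, and numbers are illustrative only, not data from the cited studies):

```python
from collections import defaultdict

# Illustrative per-sample read counts for a handful of genera (invented data).
samples = {
    "cow_01": {"Pseudomonas": 120, "Lactococcus": 80, "Bacillus": 40, "Acinetobacter": 10},
    "cow_02": {"Pseudomonas": 200, "Lactococcus": 30, "Bacillus": 55},
    "cow_03": {"Pseudomonas": 90, "Lactococcus": 60, "Bacillus": 20, "Acinetobacter": 5},
}

def core_taxa(samples, prevalence_threshold=0.95):
    """Return taxa detected in at least `prevalence_threshold` of samples."""
    prevalence = defaultdict(int)
    for counts in samples.values():
        for taxon, n in counts.items():
            if n > 0:
                prevalence[taxon] += 1
    n_samples = len(samples)
    return sorted(t for t, k in prevalence.items() if k / n_samples >= prevalence_threshold)

def relative_abundance(counts):
    """Convert raw read counts for one sample into relative abundances."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

# Acinetobacter is absent from cow_02 (2/3 of samples < 95%), so it is not core.
print(core_taxa(samples))
```

The same prevalence-based filter, applied over hundreds of samples, is what underlies statements such as "present in over 97% of the total samples evaluated".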
Similarly, several recent studies separately revealed the presence of the four core phyla in milk samples from dairy cattle in Ireland, Pakistan, Turkey, Japan, China, Italy, Bangladesh, and Korea (Hoque et al. ; Ryu et al. ; Kizil et al. ; Yang et al. ; AoDaohu et al. ; Yap et al. ; Salman et al. ). The core BVM is generally believed to consist of Bacteroides, Staphylococcus, Lactobacillus, Propionibacterium, Enterococcus, Streptococcus, Lactococcus, Porphyromonas, Corynebacterium, Fusobacterium, and Pseudomonas (Addis et al. ; Hoque et al. ; Oikonomou et al. ; Porcellato et al. ; Power et al. ; Guo et al. ). Interestingly, some of these genera are often associated with healthier udder-quarters in cows (Addis et al. ). The structure and composition of the milk microbiota of small ruminants are highly variable, probably owing to the limited number of studies as well as to several environmental and individual species/breed-specific factors. The milk microbiota of small ruminants such as goat, sheep, reindeer, and water deer show significant differences, suggesting environmental influences, host-associated factors, and host-microbial adaptation as major drivers of microbial composition and structure (Li et al. ; Oikonomou et al. ; Polveiro et al. ; Guo et al. ; Hoving-Bolink et al. ). So far, there has been no consensus on the overall core microbial phyla or genera in the milk of small ruminants across various species or breeds. Out of the 31 and 43 phyla identified from the milk samples of 212 ewes of the Spanish Churra breed and 50 Assaf ewes, respectively, Actinobacteria, Bacteroidetes, Firmicutes, and Proteobacteria accounted for 97.4% and 90.07% of all the samples examined (Esteban-Blanco et al. , ). While Actinobacteria, Firmicutes, and Proteobacteria were reported as the core phyla in the milk microbiota in sheep (Esteban-Blanco et al.
, ), the presence of other recurring phyla, especially Bacteroidetes, Acidobacteria, Cyanobacteria, and Fusobacteria, has been documented as well (Castro et al. ; Esteban-Blanco et al. , ). Although over 1000 genera were identified from sheep milk, studies have shown Corynebacterium, Lactobacillus, Staphylococcus, Streptococcus, and Escherichia/Shigella to be the core microbiota of milk from healthy sheep (Castro et al. ; Esteban-Blanco et al. , ; Toquet et al. ). However, host-related factors such as breed, as well as geographic location, have been suggested to impact milk microbiota composition in sheep (Castro et al. ; Esteban-Blanco et al. ). Goat milk microbiota is reported to primarily contain Firmicutes and Proteobacteria as the core phyla and, to a minor extent, Actinobacteria (Li et al. ; Zhang et al. ; Niyazbekova et al. ; Polveiro et al. ; Lauková et al. ; Hoving-Bolink et al. ). These phyla usually constitute about 90% of the total bacterial phyla in milk from healthy goats. Furthermore, studies have shown phylum-level variations in goat milk microbiota composition during the lactation period (McInnis et al. ; Zhang et al. ; Niyazbekova et al. ). In a recent study, Hoving-Bolink et al. identified Lactococcus, Staphylococcus, Pseudomonas, Acinetobacter, Corynebacterium, and Microbacterium as the core genera in healthy goat milk. Polveiro et al. reported the presence of Staphylococcus spp, Brevibacterium spp, Enterococcus spp, and Bacteroides spp in all the milk samples examined, including those from healthy goats and from goats diagnosed with clinical, subclinical, and gangrenous mastitis. Whereas Curtobacterium, Staphylococcus, and Bifidobacterium were reported as the core genera in milk samples from healthy goats across farms in central and eastern Slovakia, Enterococcus, Lactococcus, Streptococcus, Lacticaseibacillus, and Lactobacillus were also prevalent (Lauková et al. ).
Factors including animal breed, sample origin, farm location, and management appear to be the key drivers of goat milk microbial composition and structure.

Intramammary infections resulting in mastitis constitute a common disease in mammalian species globally. Mastitis usually causes a significant decrease in milk production, dysbiosis of the milk microbiota, undesired weaning, premature culling, difficulty in conception, and treatment costs in both humans and animals (Wolfenson et al. ; Boix-Amorós et al. ; Fernández et al. ; Wang et al. ; Borş et al. ; Ito et al. ; Crippa et al. ). Mastitis affects approximately 10 to 33% of all lactating women, resulting in severe public health problems for both infants and mothers (Pevzner and Dahan ; Wilson et al. ). In animals, especially cattle, mastitis has been well documented to cause up to a 15% decrease in milk production as well as impaired overall well-being and behavioral changes (Addis et al. ; Toquet et al. ; Morales-Ubaldo et al. ). Mastitis is often characterized by dysbiosis of the milk (and mammary) microbiota, depending on the clinical manifestation (clinical or subclinical) or the course (acute, granulomatous, and subacute) (Angelopoulou et al. ; Demmelmair et al. ; Fernández et al. ; Dobrut et al. ). The analysis of mastitis milk from humans and animals provides new insights into the extent of milk microbiota perturbations as well as into the ecology of mastitis-associated etiologies. As previously mentioned, milk from healthy hosts contains highly diverse bacteria, the majority of which are regarded as nonpathogenic and often unrelated to mastitis. The role of these bacterial groups in the initiation, progression, and prevention of mastitis is not fully elucidated. However, emerging evidence has shown the potential impact of the milk microbiota on the development of mastitis (Hoque et al. ; Ito et al. ; Yang et al. ; AoDaohu et al. ; Guo et al. ; Salman et al. ).
Milk from mastitis-suffering animals and women shows a distinct microbiota composition and structure when compared to that of healthy hosts (Mediano et al. ; Hoque et al. ; Selma-Royo et al. ; Toquet et al. ; Kizil et al. ; Yang et al. ; Salman et al. ). The milk microbiota in acute and subacute mastitis is often distinct in both structure and composition, with an abundant presence of aerotolerant bacteria, especially Staphylococcus, and a significantly reduced diversity and depletion of beneficial obligate anaerobes, including Faecalibacterium, Eubacterium, Ruminococcus, etc. (Patel et al. ; Derakhshani et al. ; Esteban-Blanco et al. ; Boix-Amorós et al. ). In a study of 1849 milk samples from individual lactating women with mastitis (acute/subacute), Staphylococcus epidermidis and Staphylococcus aureus were detected in 91.56% and 29.74% of the milk samples examined, respectively (Mediano et al. ). Additionally, streptococci (70.20%) and corynebacteria (16.60%) constituted the dominant microbial groups in the milk analyzed. The presence of S. epidermidis was previously reported in 85% of milk from women with mastitis (Delgado et al. ). While Staphylococcus is the core genus associated with acute and subacute mastitis, S. aureus and S. epidermidis are the staphylococci most frequently isolated from mastitis milk (Jiménez et al. ; Boix-Amorós et al. ). Although Pseudomonas, Klebsiella, Serratia, Ralstonia, Aeromonas, and Enterococcus are other enriched and frequently isolated genera from mastitis milk, Clostridium, Ruminococcus, Faecalibacterium, Acinetobacter, and Eubacterium are consistently depleted in milk samples of subacute and acute mastitis (Patel et al. ; Angelopoulou et al. ; Hoque et al. ). The lower microbial diversity characterizing the mastitis milk microbiota is accompanied by an increased presence of opportunistic pathogens such as Escherichia coli, Bacillus subtilis, B. cereus, E. faecalis, S. epidermidis, S. hominis, and K. pneumoniae.
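The reduced diversity described for mastitis milk is commonly quantified with alpha-diversity measures such as the Shannon index, H = -Σ p_i ln p_i. A minimal sketch comparing two invented abundance profiles (illustrative only, not data from the cited studies):

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

# Invented profiles: an even community vs. one dominated by a single taxon,
# mimicking the staphylococci-dominated profile described for mastitis milk.
healthy = [25, 25, 25, 25]   # four equally abundant taxa
mastitis = [97, 1, 1, 1]     # one dominant taxon

print(round(shannon_index(healthy), 3))   # ln(4) ≈ 1.386, the maximum for 4 taxa
print(round(shannon_index(mastitis), 3))  # much lower, reflecting dominance
```

Dominance by a single taxon drives H toward zero even when the same number of taxa is detected, which is why mastitis milk can appear "less diverse" despite a similar taxon list.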
In comparison to milk from healthy individuals, mastitis milk contains a significantly higher presence of Staphylococcaceae, Brucellaceae, Burkholderiaceae, Streptococcaceae, and Aeromonadaceae at the family level, as well as a higher presence of Staphylococcus, Streptococcus, Ralstonia, Klebsiella, Aeromonas, Leptospira, and Proteus at the genus level (Boix-Amorós et al. ; Ito et al. ; Khasapane et al. ; Singh et al. ; Jin et al. ). Bovine mastitis is characterized by an increased presence of Mycoplasma spp, Streptococcus dysgalactiae, Streptococcus agalactiae, Streptococcus uberis, S. aureus, E. coli, Klebsiella pneumoniae, and Corynebacterium bovis in cow milk (Falentin et al. ; Belay et al. ; Girma and Tamir ; Morales-Ubaldo et al. ). Belay et al. identified S. aureus (42.6%), Streptococcus spp. (26.2%), non-aureus staphylococci (14.8%), E. coli (11.5%), Salmonella spp (3.3%), and K. pneumoniae (1.6%) as the predominant bacterial species in 422 milk samples of lactating cows diagnosed with mastitis. Whilst the most abundant bacterial classes in mastitis cow milk were reported to include Bacilli, Clostridia, Alphaproteobacteria, Actinobacteria, and Gammaproteobacteria, the dominant bacterial species include Pseudomonas koreensis, P. azotoformans, P. fragi, Acinetobacter guillouiae, and Mycobacterium bovis (Khasapane et al. ). In addition to the core microbial groups in bovine milk previously mentioned, bovine mastitis results in the additional presence of enriched bacterial species, including Staphylococcus hominis, Lactobacillus acidipiscis, and four unknown species of the genera Tetragenococcus, Mogibacterium, Jeotgalicoccus, Hymenobacter, Lachnospiraceae, and Anaerococcus (Alessandri et al. ; Burakova et al. ).
The decrease in the abundance of beneficial bacterial groups such as Atopostipes, Massilia, Acetitomaculum, and Ralstonia in the milk of cows with mastitis demonstrates their role in the maintenance of eubiosis and a healthy milk microbiota (Burakova et al. ). Interestingly, the milk microbiota of goats with mastitis is highly diverse and appears to be dominated by Fusobacterium, Bacteroides, and Proteobacteria when compared to milk from healthy goats (Polveiro et al. ; Toquet et al. ). Analysis of milk from ewes with mastitis revealed the same genera constituting the core microbiota (in healthy ewes) as mentioned above, as well as Clostridium spp, Turicibacter spp, Romboutsia spp, Jeotgalicoccus spp, Pseudomonas spp, and Alloicoccus spp (Esteban-Blanco et al. ). In the same vein, milk from ewes previously known to suffer from mastitis harbored diverse bacterial species, including Sphingobacterium spiritivorum, Staphylococcus warneri, S. schleiferi, S. equorum, S. haemolyticus, S. felis, Pseudomonas aeruginosa, Enterococcus hirae, Clavibacter michiganensis, Bacillus pumilus, Mannheimia haemolytica, and Corynebacterium spp (Gelasakis et al. ; Castro et al. ). Recently, Couvillion et al. suggested the existence of a causal relationship between mastitis-associated host phenotypes and the milk microbiota.

The experimental methodologies for profiling the milk microbiota continue to evolve. The advent of high-throughput technologies for microbiota characterization, in addition to the classic microbiological methods, shows that the methods/techniques used in microbiota profiling are pivotal in uncovering the observed taxa or bacterial groups in milk (Lopez Leyva et al. ; Notarbartolo et al. ; Selma-Royo et al. ; Cheema et al. ). So far, the experimental methods for the analysis of milk microbiota rely on both traditional culture-dependent techniques and culture-independent methods (Fig.
) which primarily depend on nucleic acid sequence-based approaches, including amplicon sequencing, shotgun metagenomics, and metatranscriptomics (Table ). The initial microbiological studies on milk relied on traditional culture-dependent techniques to characterize milk microorganisms. The culture-based techniques assess the morphological, phenotypic, and biochemical characteristics of the isolated strains, which are sometimes also identified genotypically. Culture-based conditions are often biased toward identifying pathogens as well as viable and dominant bacteria, whereas fastidious, non-culturable, and less abundant bacteria usually go undetected (Ruiz et al. ; Lopez Leyva et al. ; Cheema et al. ). Though powerful in profiling the viability of specific milk-borne bacteria, culture-based techniques only reveal the limited taxa capable of withstanding sampling procedures, transportation, storage, and experimental/laboratory conditions. Consequently, these techniques can selectively reduce the apparent depth of the overall microbial community, detecting only a minute fraction of the bacterial taxa in milk (Browne et al. ; Sakwinska and Bosco ; Cheema et al. ). For instance, of the 554 bacterial species identified in human milk, only 210 species (38%) were detected by culture-based techniques (Togo et al. ). Through these techniques, the presence of dominant facultative anaerobes and pathogenic bacteria associated with mammary infections, including Streptococcus, Staphylococcus, Propionibacterium, and Corynebacterium, has been demonstrated (Martín et al. ; Ruiz et al. ). Additionally, bifidobacteria and several lactic acid bacteria, especially Enterococcus, Weissella, Lactobacillus, Lactococcus, Leuconostoc, etc., have been successfully detected in milk using nutrient-specific culture media and regulated incubation conditions (Abrahamsson et al. ; Albesharat et al. ; Martín et al. ; Murphy et al. ; Breitenwieser et al. ; Selma-Royo et al. ; Damaceno et al. ; Wang et al. ).
Despite the limitations of culture-based techniques, culturing facilitates the exploitation and preservation of bacterial strains for potential applications in biotechnological, health, and agrifood systems. Apart from the frequent milk microbiological studies involving the distribution of [pathogenic] bacteria as well as their antimicrobial resistance and virulence determinants, potentially beneficial traits such as probiotic properties, bacteriocin production, and other biotechnologically important capabilities are extensively sought from milk-borne bacteria (Zhang et al. ; Kim et al. ; Damaceno et al. ; Asha et al. ; da Cunha et al. ; Elnar and Kim ). The recent decades have witnessed the advent and development of a culture-based microbiota approach known as culturomics. Culturomics is a highly effective culture-dependent technique that uses high-throughput and specific microbial culture conditions for the large-scale isolation and rapid identification of bacteria in a community (Lagier et al. , ; Ruiz et al. ; Cheema et al. ). The culturomics approach facilitates the collection of a comprehensive repertoire of the microbiota and also the detection of species with low abundance, which are often undetectable by culture-independent methods, including metagenomics (Seck et al. ; Wang et al. ). While culturomics may not be effective or sufficient for quantifying species abundance, it is the most suitable approach for obtaining a comprehensive and viable repertoire of the microbiota (Dickson ; Togo et al. ). The use of culturomics techniques has successfully led to the isolation and identification of a large repertoire of previously undetected and unculturable bacteria from the gut (Lagier et al. , ; Cheema et al. ). Previous studies have optimized a variety of rapid, economical, and effective culturomics techniques for the isolation of different types of bacteria from the gut microbiota of both humans and animals (Lagier et al. , ; Chang et al. ; Hou et al.
, ; Wang et al. ; Wan et al. ; Huang et al. ). Unlike for the gut microbiota, there is not yet a widely recognized, robust culturomics technique designed specifically for the milk microbiota. Recently, Wang et al. successfully characterized the breast milk microbiota using a viable and effective culturomics strategy. Their study provided a solid foundation for the future application of the culturomics approach in milk microbiota research. Using four different culture media and conditions together with MALDI-TOF MS analysis, they identified 6601 colonies and obtained 865 bacterial strains, representing 54 species, 21 genera, and 4 phyla. Furthermore, they reportedly cultivated over 94.4% of the total bacterial species present in the milk samples with high diversity and a 57.0% reduction in workload (Wang et al. ). Previously also, Togo and colleagues (Togo et al. , ) developed culturomics techniques for characterizing the human milk microbiota of healthy breastfeeding African women. These separate studies isolated novel bacterial species, including Anaerolactibacter massiliensis, Acidipropionibacterium timonense, Lactomassilus timonensis, Lactimicrobium massiliense, and Galactobacillus timonensis, using a culture-based culturomics approach. These few instances where culturomics was applied to the milk microbiota (Togo et al. , ; Wang et al. ) improved our understanding of the robustness of culture-dependent techniques in characterizing the viable microbial community and in establishing and preserving a comprehensive repertoire of bacteria in milk. As current knowledge of the microbiota continues to increase, traditional culture-dependent methods may evolve in parallel to provide cost-effective strategies for identifying novel microorganisms and understanding the microbiota. The development of culture-independent high-throughput sequencing (‘omics’) technologies has revealed the ecological intricacies of microbial communities across a wide range of environments, including milk.
These technologies allow biologists to study the true diversity of the bacterial world, thus revolutionizing the fields of microbiology and microbial ecology. Since culture-dependent techniques only reveal culturable bacteria, which represent a fraction of the bacterial communities in a niche, culture-independent approaches use DNA, RNA, proteins, and metabolites to characterize the microbiota (Ruiz et al. ; Chakraborty et al. ; Naqvi et al. ). Despite experimental biases and other limitations, culture-independent techniques can detect previously unknown or yet-to-be-cultured bacterial groups with high sample throughput, regardless of microbial viability. Additionally, omics technologies have been used to identify relationships between microbial composition or structure and function (Couvillion et al. ). A wide range of culture-independent omics approaches for milk microbiota profiling are available and have been increasingly used in the last two decades.

16S and shotgun metagenomics

The 16S rRNA gene amplicon sequencing is the most commonly used culture-independent approach for milk microbiota profiling. While this approach provides comprehensive compositional, structural, and taxonomic information, shotgun metagenomics provides deeper insight into fine-resolution taxonomic diversity (at the species, subspecies, and strain levels) and functional features of microbial communities by analyzing gene sequences that encode functional RNAs or proteins (Moossavi et al. ; Couvillion et al. ; Sun et al. ). Over the years, several studies have characterized the milk microbiota of both healthy and diseased (mastitis) hosts (e.g. humans, cows, goats, etc.) using metagenomics (Kordy et al. ; Olshan et al. ; Polveiro et al. ; Dahlberg et al. ; Khasapane et al. ; Alessandri et al. ; Burakova et al. ; Ajeeb et al. ).
Recent studies used metagenomic sequencing to characterize and compare the milk microbiota of cows with mastitis and healthy controls, with some studies annotating the metagenomic sequences to relate the microbiota to functional genes and metabolic pathways (Hoque et al. ; Tarrah et al. ; Alessandri et al. ; Khasapane et al. ; Sahoo et al. ; Zhang et al. , ; Ran et al. ; Sabarish and Dhanasekaran ; Salman et al. ). Similarly, metagenomic analysis has been used in different studies to characterize the microbiota of human breast milk from women with and without mastitis (Jiménez et al. ; Boix-Amorós et al. ; Hoque et al. ; Asbury et al. ; Castro et al. ; Ito et al. ; Ong et al. ; Sindi et al. ; Chen et al. ; Filatava et al. ; Treven et al. ; Ran et al. ; Endika et al. ). In addition to the microbial composition, structure, and diversity identified by metagenomics, other genomic determinants, including immunologic profiles and metabolic, virulence, and antibiotic resistance determinants, have been associated with microbiota perturbations and mastitis in women (Castro et al. ; Ito et al. ; Ong et al. ; Sindi et al. ; Treven et al. ; Ran et al. ; Endika et al. ). The characterization of the microbiota using sequence-based approaches involves either profiling the whole set of microbial genomes within the community or targeted sequencing of the 16S rRNA gene. The latter combines the amplification and sequencing of a fragment of the 16S rRNA gene to characterize the microbiota. As the most conserved and most frequently targeted gene in bacteria, the 16S rRNA gene carries hypervariable regions (V1–V9) flanked by conserved stretches to which a pair of primers binds; the intervening region is then amplified to capture taxonomic information (Addis et al. ; Sarangi et al. ; Parente et al. ; Lopez Leyva et al. ).
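The primer-binding step can be sketched with a toy matcher for degenerate (IUPAC-coded) primer sequences. The template below is invented for the example; the degenerate primer follows the pattern of the widely used 515F primer and is shown purely for illustration:

```python
# Map IUPAC degenerate bases to the sets of nucleotides they stand for.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "M": "AC", "K": "GT",
    "S": "CG", "W": "AT", "N": "ACGT",
}

def primer_sites(template, primer):
    """Return 0-based start positions where the degenerate primer matches."""
    hits = []
    for i in range(len(template) - len(primer) + 1):
        window = template[i:i + len(primer)]
        if all(base in IUPAC[p] for base, p in zip(window, primer)):
            hits.append(i)
    return hits

# Invented template carrying one binding site for the degenerate primer.
template = "TTGACGTGCCAGCAGCCGCGGTAATACG"
primer = "GTGYCAGCMGCCGCGGTAA"  # degenerate: Y = C/T, M = A/C

print(primer_sites(template, primer))  # → [5]
```

The degenerate positions (Y, M) let one primer pair bind slightly divergent 16S sequences across taxa, which is what allows a single PCR to amplify the chosen hypervariable region from a mixed community.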
While the 16S rRNA remains the most common approach in characterizing the milk microbiota, some limitations, including primer specificity, low bacterial load in milk, varied experimental platforms and procedures, variability in diversity estimates, loss of diversity due to amplification biases, etc. have been recognized (Jumpstart Consortium Human Microbiome Project Data Generation Working Group ; Logares et al. ; Fitzstevens et al. ; Sarangi et al. ; Lopez Leyva et al. ). These contribute to the inconsistent reports on prevalent or core genera or species associated with milk from healthy or mastitis-suffering hosts. Alternatively, shotgun metagenomics aims at sequencing DNA directly from available genomic material in a sample and then assembles contiguous sequences or entire genomes in order to assign high-resolution functional and taxonomic information (Sarangi et al. ; Almeida et al. ; Peterson et al. ; Usyk et al. ). Unlike the 16S rRNA gene sequencing, this approach does not target or amplify a specific gene. Therefore, it overcomes the bias associated with gene amplification and is generally regarded as the gold standard for microbiome characterization (Lopez Leyva et al. ). Large amounts of reads that are complex and difficult to assemble de novo are often generated and usually mapped and annotated for quantitative analysis using different bioinformatics pipelines and databases for high-resolution taxonomic and functional characterization of metagenomes in microbial communities. Metagenomic sequencing approaches are DNA-based techniques that describe the presence of microorganisms and genes within the community but are incapable of characterizing transcriptional profiles of the entire microbial community or individual microorganisms in the community (Couvillion et al. ; Cheema et al. ). 
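The assembly step at the heart of shotgun metagenomics can be illustrated with a toy greedy overlap merger. Real assemblers typically use de Bruijn graph methods and must handle sequencing errors, strandedness, and uneven coverage; the reads below are invented for the example:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that equals a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest suffix-prefix overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if i is None or k == 0:
            break  # no remaining overlaps: leave the rest as separate contigs
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads

# Three invented overlapping reads collapse into a single contig.
reads = ["ATGGCGT", "GCGTACG", "ACGTTAG"]
print(greedy_assemble(reads))  # → ['ATGGCGTACGTTAG']
```

The resulting contigs are then what gets mapped and annotated by the bioinformatics pipelines mentioned above to assign taxonomy and function.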
Metatranscriptomics

The previously discussed sequence-based metagenomics approaches describe the structure and composition of microbes and genes within a community, but not the functional activity of individual organisms or of the whole community. Metatranscriptomics characterizes the transcriptional profiles of microbial communities and therefore provides insight into the active functional profile of the microbiome (Aguiar-Pulido et al. ; An et al. ; Arıkan and Muth ). Metatranscriptomic analysis examines the total RNA within the community (the metatranscriptome); by capturing the total mRNA in a sample under specific conditions, it provides information on gene expression within the community at a specific time. Using metatranscriptomics, the pool of RNA transcripts expressed in a community at a given time is analyzed, thus simultaneously allowing the characterization of both microbial abundance (rRNA) and gene expression (mRNA) in a community (Tveit et al. ; Addis et al. ; Zhang et al. , ). Metatranscriptomics was initially applied using hybridization or qPCR-based techniques (Higuchi et al. ; Simon and Daniel ). However, with advances in sequencing technologies, RNA-Seq has been established as the gold standard, mainly because of the lack of reference isolates and the high diversity of microbial communities (Zhang et al. , ; Arıkan and Muth ). Metatranscriptomics has been used in both humans and animals to characterize temporal gene expression and functional activity in mammary cells and milk during the lactation cycle (Martin Carli et al. ; Twigger et al. ; Wu et al. ; Xuan et al. ; Smilowitz et al. ; LeMaster et al. ; Xia et al. ; Doerfler et al. ; Zorc et al. ; Pozovnikova et al. ).
The 16S rRNA gene amplicon sequencing is the most commonly used culture-independent approach for milk microbiota profiling.
While this approach provides comprehensive compositional, structural, and taxonomic information, shotgun metagenomics provides deeper insight into fine-resolution taxonomic diversity (at species, subspecies, and strain levels) and functional features in microbial communities by analyzing gene sequences that encode functional RNAs or proteins (Moossavi et al. ; Couvillion et al. ; Sun et al. ). Over the years, several studies have characterized the milk microbiota from both healthy and diseased (mastitis) hosts (e.g. humans, cows, and goats) using metagenomics (Kordy et al. ; Olshan et al. ; Polveiro et al. ; Dahlberg et al. ; Khasapane et al. ; Alessandri et al. ; Burakova et al. ; Ajeeb et al. ). Recent studies used metagenomic sequencing to characterize and compare the milk microbiota from cows with mastitis and healthy controls, with some studies annotating the metagenomic sequences to identify and relate the microbiota with functional genes and metabolic pathways (Hoque et al. ; Tarrah et al. ; Alessandri et al. ; Khasapane et al. ; Sahoo et al. ; Zhang et al. , ; Ran et al. ; Sabarish and Dhanasekaran ; Salman et al. ). Similarly, metagenomic analysis has been used in different studies to characterize the microbiota of human breast milk from women with and without mastitis (Jiménez et al. ; Boix-Amorós et al. ; Hoque et al. ; Asbury et al. ; Castro et al. ; Ito et al. ; Ong et al. ; Sindi et al. ; Chen et al. ; Filatava et al. ; Treven et al. ; Ran et al. ; Endika et al. ). In addition to the microbial composition, structure, and diversity identified by metagenomics, other genomic features, including immunologic profiles and metabolic, virulence, and antibiotic resistance determinants, have been associated with microbiota perturbations and mastitis in women (Castro et al. ; Ito et al. ; Ong et al. ; Sindi et al. ; Treven et al. ; Ran et al. ; Endika et al. ).
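Comparisons of community diversity like those above usually come down to standard alpha-diversity indices. A minimal, self-contained sketch (the genus-level read counts are invented for illustration, not taken from any cited study):

```python
import math

def shannon(counts):
    """Shannon diversity index: H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def simpson(counts):
    """Gini-Simpson diversity index: 1 - sum(p_i^2)."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Hypothetical genus-level read counts (not data from any cited study)
healthy = [250, 220, 180, 150, 120, 80]   # relatively even community
mastitis = [850, 60, 40, 30, 15, 5]       # one dominant genus

print(round(shannon(healthy), 3), round(shannon(mastitis), 3))
```

The dominated community yields the lower Shannon value, mirroring the reduced diversity often reported for mastitis milk.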
The characterization of the microbiota using sequence-based approaches involves either profiling the whole set of microbial genomes within the community or targeted sequencing of the 16S rRNA gene. The latter combines the amplification and sequencing of a fragment of the 16S rRNA gene to characterize the microbiota. Being the most conserved and widely targeted gene in bacteria, the 16S rRNA gene carries nine hypervariable regions (V1–V9) flanked by conserved stretches to which primer pairs bind, allowing the regions to be amplified to capture taxonomic information (Addis et al. ; Sarangi et al. ; Parente et al. ; Lopez Leyva et al. ). While the 16S rRNA gene remains the most common approach for characterizing the milk microbiota, several limitations have been recognized, including primer specificity, the low bacterial load of milk, varied experimental platforms and procedures, variability in diversity estimates, and loss of diversity due to amplification biases (Jumpstart Consortium Human Microbiome Project Data Generation Working Group ; Logares et al. ; Fitzstevens et al. ; Sarangi et al. ; Lopez Leyva et al. ). These contribute to the inconsistent reports on prevalent or core genera or species associated with milk from healthy or mastitis-suffering hosts. Alternatively, shotgun metagenomics sequences DNA directly from the available genomic material in a sample and then assembles contiguous sequences or entire genomes in order to assign high-resolution functional and taxonomic information (Sarangi et al. ; Almeida et al. ; Peterson et al. ; Usyk et al. ). Unlike 16S rRNA gene sequencing, this approach does not target or amplify a specific gene. It therefore overcomes the bias associated with gene amplification and is generally regarded as the gold standard for microbiome characterization (Lopez Leyva et al. ).
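The amplification step described above can be sketched programmatically: the snippet below converts IUPAC-degenerate primers into regular expressions and extracts the amplicon from a toy sequence. The 341F/805R V3–V4 primer sequences are quoted from memory and should be verified before any real use; a real pipeline would additionally handle mismatches, trimming, and both read orientations.

```python
import re

# IUPAC degenerate-base codes as regex character classes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[GC]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G", "R": "Y", "Y": "R",
              "S": "S", "W": "W", "K": "M", "M": "K", "B": "V", "V": "B",
              "D": "H", "H": "D", "N": "N"}

def primer_to_regex(primer):
    return "".join(IUPAC[b] for b in primer)

def revcomp(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def extract_amplicon(seq, fwd_primer, rev_primer):
    """Slice out the region spanning the forward primer site through the
    reverse primer site (the reverse primer binds the minus strand)."""
    fwd = re.search(primer_to_regex(fwd_primer), seq)
    rev = re.search(primer_to_regex(revcomp(rev_primer)), seq)
    if fwd and rev and rev.end() > fwd.start():
        return seq[fwd.start():rev.end()]
    return None

FWD_341F = "CCTACGGGNGGCWGCAG"      # assumed V3-V4 forward primer
REV_805R = "GACTACHVGGGTATCTAATCC"  # assumed V3-V4 reverse primer

toy = ("AAAA" + "CCTACGGGAGGCAGCAG" + "TG" * 20
       + revcomp("GACTACAAGGGTATCTAATCC") + "AAAA")
amplicon = extract_amplicon(toy, FWD_341F, REV_805R)
print(len(amplicon))
```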
Shotgun metagenomics generates large volumes of reads that are complex and difficult to assemble de novo; these are usually mapped and annotated for quantitative analysis using different bioinformatics pipelines and databases, enabling high-resolution taxonomic and functional characterization of metagenomes in microbial communities. Metagenomic sequencing approaches are DNA-based techniques that describe the presence of microorganisms and genes within the community but are incapable of characterizing the transcriptional profiles of the entire microbial community or of individual microorganisms in the community (Couvillion et al. ; Cheema et al. ).
Metatranscriptomics
The sequence-based metagenomics approaches discussed above describe the structure and composition of microbes and genes within a community but not the functional activity of individual organisms or the whole community. Metatranscriptomics characterizes the transcriptional profiles of microbial communities and therefore provides insight into the active functional profile of the microbiome (Aguiar-Pulido et al. ; An et al. ; Arıkan and Muth ). By capturing the total RNA in a sample (the metatranscriptome) under specific conditions, metatranscriptomics provides information on gene expression within the community at a specific time. Because the pool of RNA transcripts expressed in a community at a given time is analyzed, both microbial abundance (rRNA) and gene expression (mRNA) can be characterized simultaneously (Tveit et al. ; Addis et al. ; Zhang et al. , ). Metatranscriptomics was initially applied using hybridization or qPCR-based techniques (Higuchi et al. ; Simon and Daniel ).
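Because a total-RNA metatranscriptome mixes rRNA and mRNA reads, pipelines usually separate the two in silico before functional analysis. A toy k-mer sketch of that idea (invented sequences; real workflows use dedicated tools such as SortMeRNA against curated rRNA databases):

```python
def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def partition_reads(reads, rrna_refs, k=8, min_shared=1):
    """Bin reads as putative rRNA or mRNA by k-mer overlap with an
    rRNA reference set (a toy stand-in for tools such as SortMeRNA)."""
    ref_kmers = set()
    for ref in rrna_refs:
        ref_kmers |= kmers(ref, k)
    rrna, mrna = [], []
    for read in reads:
        bin_ = rrna if len(kmers(read, k) & ref_kmers) >= min_shared else mrna
        bin_.append(read)
    return rrna, mrna

# Toy 16S-like reference and two reads (purely illustrative sequences)
ref_16s = "ACGGCTAGCTAGGCTAACGGATCGATCGGATTACCGG"
reads = [ref_16s[5:25],                   # substring of the reference
         "TTTTCCCCAAAAGGGGTTCCAAGGTTAA"]  # unrelated read
rrna_reads, mrna_reads = partition_reads(reads, [ref_16s])
print(len(rrna_reads), len(mrna_reads))
```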
However, with advances in sequencing technologies, RNA-Seq has been established as the gold standard, mainly because of the lack of reference isolates and the high diversity of microbial communities (Zhang et al. , ; Arıkan and Muth ). Metatranscriptomics has been used in both humans and animals to characterize temporal gene expression and function in mammary cells and milk during the lactation cycle (Martin Carli et al. ; Twigger et al. ; Wu et al. ; Xuan et al. ; Smilowitz et al. ; LeMaster et al. ; Xia et al. ; Doerfler et al. ; Zorc et al. ; Pozovnikova et al. ). Milk metatranscriptomics studies have primarily centered on examining host RNA in milk for information on host cell function and health (e.g. somatic cells) rather than specific or overall community microbial functions (Couvillion et al. ). In a recent study integrating milk metagenomics and metatranscriptomics, Zhang et al. demonstrated an association between elevated somatic cell counts and a high relative abundance of Sphingomonas and Ralstonia in cows with subclinical mastitis. The expression of bovine uridine phosphorylase 1 and transcobalamin 1 correlated positively with the relative abundance of Sphingomonas and Ralstonia. Their study further revealed distinct functional alterations in some microbial processes. A further obstacle limiting microbial metatranscriptomics studies is the difficulty of differentiating microbial from host RNA in milk: as stated earlier, microbial biomass in milk is low and can easily be dominated by the far more abundant host RNA (Couvillion et al. ). In addition to providing active functional information, total RNA metatranscriptomics can also provide compositional and taxonomic insights into microbial communities (Xue et al. ; Hempel et al. ; Thøgersen et al. ). Integrating metagenomic data can facilitate metatranscriptomics analyses and assembly (Wu et al. ; Hempel et al. ; Zhang et al. , ).
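Abundance-expression associations such as the Sphingomonas/transcobalamin example above are typically screened with simple correlation statistics across samples. A self-contained Pearson sketch over synthetic numbers (not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-sample values across eight milk samples: relative
# abundance of one genus (%) vs normalized host-gene expression
abundance = [0.5, 1.2, 2.0, 2.8, 3.5, 4.1, 5.0, 5.6]
expression = [1.1, 1.9, 2.6, 3.1, 4.0, 4.4, 5.2, 6.0]
print(round(pearson(abundance, expression), 3))
```

In practice, compositional microbiome data usually call for rank-based (Spearman) or compositionality-aware methods rather than raw Pearson correlation.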
Simultaneous profiling of microbial and host transcriptome to characterize microbial and host responses in disease has been reported (Pérez-Losada et al. ; Castro-Nallar et al. ; Ramos-Tapia et al. ). Therefore, this dual and integrated approach could be applied in milk to understand functional and taxonomic interactions between microbes and hosts during mastitis. Beyond structural and taxonomic profiling, it is important to evaluate the functional roles and phenotypic features driving microbial communities as well as their impact on the host. The use of mass spectrometry-based omics approaches such as metaproteomics, metametabolomics, and lipidomics to characterize microbial communities has facilitated our understanding of the functional roles of microorganisms within their communities. Metaproteomics characterizes the entire protein content of microbiota and thus provides a direct measure of the functionality of the microbiota (Lamendella et al. ; Arıkan and Muth ). Metaproteomics also enables the evaluation and identification of splicing variants, post-translational modifications, protein complexes, and protein–protein interactions of microbial communities across ecosystems (Ahrens et al. ; Addis et al. ). The significant technological advances in recent decades now allow the increasing use of the metaproteomics approach in microbiome research, partly due to its affordability, feasibility, optimized workflows, and the application of advanced metaproteomics and computational data analysis tools and pipelines (Schiebenhoefer et al. ; Kleiner ; Sajulga et al. ; Van Den Bossche et al. ; Arıkan and Muth ; Zhu et al. ; Petrone et al. ). Currently, metaproteomics is primarily used to characterize the structure of the microbiota based on protein biomass, microbial interactions, and substrate utilization by individual microbes as well as the overall community metabolism and physiology (Kleiner ; Zhao et al. ; Buthasane et al. ; Chen et al. ; Shi et al. , ). 
In recent years, several proteomics studies characterized the functional profiles and changes in milk peptides and proteins from healthy and mastitis-suffering hosts (Thomas et al. ; Tanamati et al. ; Bathla et al. ; Turk et al. ; Winther et al. ; Rešetar Maslov et al. ; Vanzin et al. ; O’Reilly et al. ). Metaproteomics profiling has previously been used to demonstrate proteins associated with antimicrobial resistance in raw bovine milk (Piras et al. ). Similarly, metaproteomics analysis was used to unravel the functional changes in the gastrointestinal tract microbiome of colorectal cancer patients and the associated taxonomic perturbations in gut bacteria (Long et al. ). The integration of metaproteomics in microbiome research bridges metagenomics and metatranscriptomics information to the phenotypic and metabolic information available in the metabolome (Van Den Bossche et al. ). Through metaproteomics characterization, functional profiling, differential abundance, and taxonomic analysis can be conducted at the protein or peptide level (Zhang and Figeys ). Metaproteome evaluation involves identifying peptides across a broad dynamic range with high sensitivity and hence requires state-of-the-art approaches such as coupling extensive liquid chromatography (LC) separation systems with high-resolution mass spectrometers (MS) (Verberkmoes et al. ; Arıkan and Muth ). However, the major challenges in the application of metaproteomics in milk microbiome research are the high diversity and broad range of protein abundances, the presence of host (contaminating) proteins in milk, the standardization and reproducibility of experimental protocols, and the unavailability of optimized bioinformatics pipelines and annotated sequence databases for peptide identification across many community members (Kolmeder and de Vos ; Tanca et al. ; Kunath et al. ; Zhang and Figeys ; Van Den Bossche et al. ; Blakeley-Ruiz and Kleiner ).
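At the heart of the LC-MS identification step is matching observed peptide masses against theoretical masses computed from candidate sequences. A minimal sketch (monoisotopic residue masses quoted from memory, so verify against a curated table; real search engines additionally score fragment spectra, modifications, and charge states):

```python
# Monoisotopic residue masses in Da (values reproduced from memory;
# verify against a curated table before analytical use)
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
           "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
           "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
           "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.01056  # mass added by the terminal H and OH

def peptide_mass(seq):
    """Neutral monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

def match_peptides(observed_mass, candidates, tol_ppm=10.0):
    """Candidates whose theoretical mass lies within tol_ppm of the
    observed neutral mass (the core of a database-search step)."""
    return [pep for pep in candidates
            if abs(peptide_mass(pep) - observed_mass) / peptide_mass(pep) * 1e6 <= tol_ppm]

hits = match_peptides(799.3600, ["PEPTIDE", "SAMPLER"])
print(hits)
```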
Interestingly, integrating metagenomics data from the same milk sample under study provides valuable guidance for the efficient annotation and identification of proteins (Tanca et al. ; Gouveia et al. ; Marzano et al. ). Metametabolomics systematically identifies and quantifies all metabolites (usually small molecules with molecular weights less than 2000 Da) produced by microbial communities (Grim et al. ; McAtamney et al. ). Metametabolomics provides not only overall information on the physiological state of the microbiome but also on its signaling processes, pathway regulation, and phenotypes (Arıkan and Muth ). The metabolome of a microbial ecosystem is often regarded as the most direct indicator of the health (eubiosis) or dysbiosis (perturbation of homeostasis) of an ecosystem (Bernini et al. ). Metametabolomics evaluation of the microbiome also provides useful information about microbial interactions within the community, metabolite biomarkers, novel enzymes, as well as the host environment (Bauermeister et al. ). Environmental factors such as diet, environmental stressors, and xenobiotics may directly impact metabolomic profiles (Manor et al. ; Aguiar-Pulido et al. ). Metametabolomics has been broadly and increasingly applied in different areas, including translational microbiome research. Different sizes and types of metabolites can be introduced into milk primarily as a result of microbial metabolism or from maternal origins through mammary epithelial cell secretions, serum, or somatic cell activity (Suh ; Ali et al. ; Stinson and George ; Hailemariam et al. ). Also, the overall quality and safety of milk, especially its organoleptic properties, nutritional value, coagulation activity, and heat and storage stability, can be influenced by the presence of certain metabolites (Couvillion et al. ; Hailemariam et al. ).
While the majority of milk metabolomics studies have aimed at unraveling the nutritional value of milk across animal species, humans, lactational stages, geography, or infant formula (Qian et al. ; Bardanzellu et al. ; Poulsen et al. ; Pintus et al. ; Su et al. ; Lemas et al. ; Yan et al. ), fewer (and more recent) investigations are analyzing the metametabolome of mastitis milk of both animal and human origins (Dervishi et al. ; Xi et al. ; Zhang et al. ; Miyata et al. ; Wang et al. ). Recent studies of milk metametabolomes reported higher amino and organic acids in milk from hosts suffering from mastitis (Zhu et al. , ; Miyata et al. ). Furthermore, pathway analysis demonstrated amino acid metabolism and energy metabolism as the major mechanisms of alteration in milk metabolomes during mastitis (Zhu et al. , ). Zhu et al. and Xi et al. similarly reported an association between mastitis and alterations in the biosynthesis of tryptophan, tyrosine, and phenylalanine in the tricarboxylic acid cycle as well as downregulation of energy, carbohydrate, and lipid metabolism. In their study, Gómez-Gallego et al. found a correlation between the relative abundance of bacteria and specific metabolites in breast milk obtained from 79 healthy lactating women from Spain, South Africa, Finland, and China. This suggests potential functional interactions between milk microbiota and metabolites. Microbial metabolomes are commonly characterized using either mass spectrometry (MS) (coupled with gas chromatography (GC), capillary electrophoresis (CE), or liquid chromatography (LC)) or nuclear magnetic resonance (NMR) spectroscopy. NMR is economical, non-destructive, and has the advantage of direct detection and quantification of metabolites; however, it is relatively less sensitive and has lower throughput (Emwas et al. ; Arıkan and Muth ; Zniber et al. ).
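The pathway analyses cited above generally rest on over-representation testing. A minimal hypergeometric sketch with invented numbers (dedicated tools such as MetaboAnalyst wrap essentially the same test):

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """One-sided hypergeometric p-value: probability that at least k of
    n significant metabolites fall in a pathway of K members, when N
    metabolites were measured in total."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 20 metabolites measured, 5 in the pathway of interest,
# 5 significantly altered, 3 of those in the pathway
p = enrichment_pvalue(N=20, K=5, n=5, k=3)
print(round(p, 4))
```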
On the other hand, MS (coupled with LC or GC) is highly sensitive and capable of detecting a wider range of molecules; however, it is a destructive technique and suffers from technical challenges in the quantification of molecules (Abram ; Arıkan and Muth ). The common limitations associated with the use of metabolomics in microbiome research are the non-uniformity in sample preparation and molecule profiling, the prevalence of unknown molecules in untargeted analysis, difficulties in annotation and bioinformatics analysis, and the lack of unified databases (Alseekh et al. ; Cai et al. ; Niranjan et al. ). While the use of mass spectrometry (especially MALDI MS) in disease diagnosis and biomarker profiling has yielded significant results over the years, it is limited in some ways. MS techniques rely on complex and laborious upfront sample preparation and separation, sample quality/homogeneity, as well as time-consuming analysis of each sample by the mass analyzer (Piras et al. ; Karch et al. ). Additionally, the upfront sample preparation often results in the loss of high-resolution proteoform-related information due to enzymatic digestion (Schaffer et al. ; Melby et al. ; Kaulich et al. ). Ion mobility and the dissociation efficiency of protein complexes also constitute additional challenges of mass spectrometric analysis (Marklund and Benesch ; Karch et al. ). To overcome these limitations, the liquid atmospheric pressure (LAP)-MALDI MS technique has been developed for high-throughput mass spectrometric analysis of biomolecules for both disease diagnosis and phenotypic profiling of microbial communities (Hale et al. , ; Piras et al. ; Lellman et al. ; Challen et al. ). LAP-MALDI MS analyzes easily prepared liquid samples at atmospheric pressure, introducing them into the analyzer with less matrix-cluster ion interference (Ryumin and Cramer ; Piras et al. ; Challen et al. ).
In contrast to conventional MALDI MS, LAP-MALDI allows stable ion yields, homogeneous samples (liquid droplets), and record-breaking sample analysis speed (~ 60 samples per second). Additionally, LAP-MALDI is coupled with a heated inlet capillary that predominantly produces electrospray ionization (ESI)-like multiply charged ions of biomolecules (e.g. proteins and peptides), enabling their detection within a limited m/z range (Hale et al. ; Krenkel et al. ; Challen et al. , ). In recent years, LAP-MALDI MS has been increasingly utilized for the early and rapid detection of pre-clinical mastitis and the diagnosis of diseases, including bovine tuberculosis, with high specificity and sensitivity (Hale et al. ; Piras et al. ; Krenkel et al. ; Challen et al. ; Lellman et al. ). Only small amounts of milk are required to analyze microbial and/or host-associated biomarkers and AMR determinants in response to mastitis. These small volumes of milk are prepared using a rapid, one-pot/two-step protocol that allows the detection of proteins, peptides, and lipids within the mass spectral profile, hence detecting mastitis usually days before the onset of clinical manifestations (Hale et al. ; Piras et al. ). LAP-MALDI is designed to work with high-throughput mass spectrometers to simultaneously detect a wide range of heterogeneous metabolites and biomolecules, including proteins, peptides, and lipids (Piras et al. ; Challen et al. ). In a study involving 135 milk samples from 109 cows, a high abundance of multiply charged ions in the m/z range of 600–1000, attributable to peptides, was detected in mastitis samples (Hale et al. ). Deconvolution of the mass spectra indicated multiply charged ions of m/z 600 and above in mastitis milk samples (62/135). LAP-MALDI MS was also used in another study to detect clinical and pre-clinical bovine mastitis from approximately 12,000 milk samples obtained from 500 cows within 6 months.
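The deconvolution step mentioned above exploits the fact that consecutive charge states of one molecule constrain both its charge and its neutral mass. A minimal sketch for a pair of adjacent peaks (simulated values, not spectra from the cited studies):

```python
PROTON = 1.00728  # proton mass in Da

def deconvolute_pair(mz_low_z, mz_high_z):
    """Infer charge and neutral mass from two peaks assumed to be
    consecutive charge states z and z+1 of the same molecule
    (mz_low_z is the higher m/z value, i.e. the lower charge)."""
    z = round((mz_high_z - PROTON) / (mz_low_z - mz_high_z))
    neutral_mass = z * (mz_low_z - PROTON)
    return z, neutral_mass

# Simulated species of neutral mass 5000 Da observed at z = 7 and z = 8
mz7 = (5000.0 + 7 * PROTON) / 7
mz8 = (5000.0 + 8 * PROTON) / 8
z, mass = deconvolute_pair(mz7, mz8)
print(z, round(mass, 2))
```

Production deconvolution algorithms extend this idea across whole charge-state envelopes and isotope patterns.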
LAP-MALDI MS profiles showed the presence of lipids (with the most abundant ions originating from triacylglycerols, phosphocholines, diacylglycerols, and sphingomyelins) and proteins/protein fragments as multiply charged ion species acquired over the m/z range of 100–2000 (Piras et al. ). Furthermore, a high abundance of isracidin-containing peptide ions and of β-casein and αS1-casein fragments was identified in mastitis milk, as reported earlier (Hale et al. ). The high sensitivity and specificity of LAP-MALDI MS allow the rapid detection of bovine mastitis two days before the onset of clinical signs and symptoms (Piras et al. ). In addition to the detection of bovine mastitis, LAP-MALDI MS has been used to detect antimicrobial resistance (AMR) determinants in mastitis milk. Using a similar protocol and experimental setup as for mastitis detection, LAP-MALDI MS can effectively and simultaneously detect beta-lactamase-based AMR from milk even faster and more simply than mastitis, obtaining the required mass spectral data within a few seconds (Piras et al. ). Unlike conventional AMR testing protocols, the advances in LAP-MALDI MS significantly reduce the overall time needed to detect AMR effectively. Lipids are another major component of mammalian milk that play vital biological roles. The milk lipidome contains complex and strategically packaged fat globules that encapsulate triacylglycerides within their core, while the outer membrane is made of phospholipids and cholesterol (George et al. ). Milk lipidomics analysis characterizes and quantifies the structure and function of intact lipid molecules in milk (Yue et al. ; Liu and Rochfort ). The increasing use of lipidomics profiling in microbiome research in recent years has not only revealed the interplay between the microbiota and lipids but also provided valuable insights into disease diagnosis (Hornburg et al. ; Walczak-Skierska et al. ; Pan et al. ; Thangaraj et al. ; Li et al. , ).
Recent reports of milk lipid profiling across several mammalian species have provided a comprehensive dataset describing 3454 triacylglycerides and 514 polar lipid molecules (Liu et al. , ; Manis et al. ; Zhang et al. ; Zhao et al. ; Gao et al. ; Sun et al. ; Wu et al. ; Pan et al. ). Milk lipidomics analysis has provided useful insights into several bioactive lipid-mediated inflammatory responses and into changes in lipid content associated with mastitis and milk microbiota dysbiosis (Ceciliani et al. , ; Couvillion et al. ; Luo et al. ). Notable changes in the milk lipidome and lipid metabolism, and an eventual decrease in milk quality, have been observed in milk from mastitis-suffering hosts (Ceciliani et al. , , ; Ganeshalingam et al. ; George et al. ; Luo et al. ; Pan et al. ; Hyötyläinen et al. ). For instance, Ceciliani et al. reported significant changes in major lipid groups, including sphingomyelins and triacylglycerols, in milk from cows with non-aureus staphylococci-associated subclinical intramammary infection. Similarly, Luo et al. showed a correlation between the changes in mastitis milk lipid content and the metabolism of glycerophospholipids, arachidonic acid, and α-linolenic acid, thus describing the role of mastitis in triggering abnormal lipid metabolism as well as driving milk microbiota diversity. This further highlights the milk lipidome as a potential biomarker for mastitis diagnosis and milk safety assessment. The rapid innovations and diversification in lipidomics methods (e.g. mass spectrometry techniques) have facilitated the continuous improvement of milk lipidomics, unraveling previously unknown lipid-microbiota interactions in milk ecosystems. The current analytical techniques employed for lipidomics analysis include LC–MS, CE-MS, ESI–MS, MALDI-MS, GC–MS, NMR, and shotgun lipidomics (George et al. ; Yue et al. ; Thangaraj et al. ; Li et al. , ).
While there have been significant advances in milk lipidomics, some limitations of this mass spectrometry-based approach remain. The unavailability of a potent lipid separation system capable of differentiating phospholipids from triacylglycerols and the lack of conventional standards for absolute lipid quantification at the species level are major limitations to the use of lipidomics in milk microbiome research (Liu et al. ; Yue et al. ; Liu and Rochfort ). The milk microbiota of humans and many animal species is now increasingly shown to harbor resistome-related proteins, which have a significant effect on mastitis and host health (Piras et al. , ; Warder et al. ; Qin et al. ; Holman et al. ; Rahmeh et al. ). The genes of the resistome in the milk microbiota code for different proteins associated with antimicrobial resistance. These genes are easily acquired and/or exchanged among the members of the microbiota, and their expression is often induced by antibiotic use or misuse (Baquero et al. ; Hwengwere et al. ). Bottom-up proteomics has been used to identify and characterize resistome proteins in milk across different animal species (Piras et al. , ). Understanding the comprehensive structure of the milk microbiome, including community (and microbial) protein expression (e.g. resistome proteins), could unravel the full functionality of the community and the pathophysiology of mastitis. The use of bottom-up proteomics and metaproteomics profiling of biological fluids, including milk, allows high-throughput identification of different proteins, protein fragments, and proteoforms from different microorganisms in a niche (Zhang et al. ). Regardless of their phylogenetic lineages, all host- and microbial-associated proteins mediating host-microbiome interactions in health or disease can be identified and quantified (Zhang et al. , , ; Levi Mortera et al. ).
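Bottom-up workflows such as those above start from an in silico tryptic digest of candidate proteins to define the peptides searched against the spectra. Trypsin's canonical rule (cleave after K or R, but not before P) can be sketched as follows; the input string is a made-up fragment, not a real resistome protein:

```python
import re

def tryptic_digest(protein, min_len=1):
    """In silico trypsin digest: cleave after K or R unless the next
    residue is P (canonical specificity, no missed cleavages)."""
    peptides = re.split(r"(?<=[KR])(?!P)", protein)
    return [p for p in peptides if len(p) >= min_len]

# Hypothetical toy sequence, not a real beta-lactamase
peptides = tryptic_digest("MAKRPGSLVKTER")
print(peptides)
```

Real digestion tools add missed cleavages and peptide length/mass filters on top of this rule.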
In a recent study, a bottom-up proteomics approach unraveled a high abundance of resistome proteins in Podolica cow milk (Piras et al. ). The AMR-specific proteins identified in that study were limited to proteins associated with tetracycline resistance and beta-lactamases. AMR-specific proteins in milk could originate from bacteria in the environment, on the teat of the udder, or within the mammary gland (Piras et al. ). Similarly, Piras et al. previously identified 29 AMR-specific proteins/proteoforms, including β-lactamase and aminoglycoside N(6′)-acetyltransferase, in milk samples. The presence of these resistome proteins demonstrates active metabolic activity by the microbes expressing them within the milk microbiome, and these specific AMR proteins could provide vital information about the composition, diversity, and structure of the microbial community and associated biomolecules. While metaproteomics provides detailed taxonomic and functional properties of the microbiome, bottom-up proteomics holds great promise in unraveling the distribution of microbial and host-associated proteins that could drive community structure and function as well as mastitis pathophysiology. The simultaneous systems-based application of milk-omics approaches (multi-omics) to study the milk microbiome is expected to generate new knowledge and open new avenues for a broader, finer, and holistic structural and functional understanding of the milk ecosystem in relation to health and disease (e.g. mastitis). The milk multi-omics approach combines multiple heterogeneous, independent data sets that can converge on similar, interlinked conclusions with higher confidence in support of specific hypotheses. Each omics approach provides a specific and unique phenotypic and functional perspective of the community.
However, by integrating these approaches and datasets through multi-omics, the overall community interactions, dynamics, functionalities, and patterns are unraveled in an unprecedented manner (Chetty and Blekhman ; Jiang et al. ; De Paepe et al. ). While single omics studies have undoubtedly contributed to our current understanding of mastitis, they often provide limited information, targeting only a single biological viewpoint, which is insufficient to provide the system-wide information necessary for elucidating the biological footprints and molecular mechanisms driving mastitis pathogenesis (Subramanian et al. ; Wang et al. ). Additionally, integrating multi-omics approaches and multi-dimensional datasets could unravel novel and previously unknown spatiotemporal microbial community relationships and/or interactions, thereby providing a complete, detailed, and multi-layer view of the ecosystem (Xu et al. ; Wang et al. ). The multi-omics approach comprising metagenomics, transcriptomics, proteomics, metabolomics, and lipidomics, in addition to culturomics, has in recent years been applied in medicine, human diseases, environmental science, and agriculture to study complex ecosystems (e.g. the gut) and develop sustainable diagnostic and treatment regimes (Chung and Kang ; Liu et al. ; Snajder et al. ; Tilgam et al. ; Ren et al. ). However, this approach is still growing and has yet to be extensively applied in studying mastitis and the raw milk microbiome. Recently, some studies have integrated a multi-omics approach to study mastitis at a systemic level (Xu et al. ; Wang et al. ; De Paepe et al. ; Zhang et al. , ). These studies revealed novel insights into the biological and molecular signatures of mastitis, thus highlighting the power of the multi-omics approach in deepening our understanding of milk microbiome signatures in mastitis.
Wang et al., for example, applied a multi-omics approach to study the [epi]genomic signatures and regulatory mechanisms in mastitis by integrating whole genome-wide DNA methylation sequencing data (WGMS), small RNA sequencing data (miRNA), and RNA sequencing data (mRNA and lncRNA) from bovine milk. Their study improved current understanding and identified detailed biological signatures and genetic mechanisms underlying mastitis. Similarly, a combined milk microbiota and metabolomics study (using amplicon sequencing and nuclear magnetic resonance spectroscopy techniques) revealed a strong disturbance of the microbiota (lower diversity and richness) with altered energy and amino acid metabolism in milk from mastitis hosts (Zhu et al. ). Correlations between milk microbiota composition, metabolites, and transcriptional profiles showed potential relationships between microbial genera, metabolite biomarkers, and transcriptional patterns of some genes in mastitis milk (Bellassi et al. ; Zhang et al. , ). Bellassi and colleagues (Bellassi et al. ) used a combined metagenomics and metabolomics approach to provide a comprehensive milk-omics landscape of raw cow milk. Their findings showed a significant correlation between the metagenomic profile and some milk metabolites, with Dermabacteraceae, Pseudomonadaceae, and Staphylococcaceae having direct and stronger correlations with those discriminant metabolites. The extensive application of the multi-omics approach and data integration in gut microbiome research has shown stability in metabolome and proteome profiles despite perturbations in microbial composition, interestingly depicting a functional redundancy that could not have been deciphered using an individual omics approach (Gierse et al. ; Muller et al. ).
An integrated multi-omics approach comprising metagenomics, metaproteomics, metatranscriptomics, and metabolomics was used to link microbial community composition and structure to functional signatures and metabolic biomarkers in faecal samples of individuals with type 1 diabetes mellitus and colon cancer (Heintz-Buschart et al. ; Kunath et al. ; Busi et al. ; Bai et al. ). Similarly, integrating a multi-omics approach in milk microbiome research could generate new knowledge, improve the current understanding of the functional and structural signatures in the milk ecosystem, and provide useful insights on mastitis development and mitigating strategies. Microbiome research integrating the multi-omics approach has the potential to unravel deeper functional changes in community biomarkers and gene expression and to attribute them to specific community members or taxa over time and space. Integrating metagenomics, metatranscriptomics, metabolomics, and lipidomics in a system-based multi-omics approach would provide a clearer perspective of the community from genes to phenotype (Aguiar-Pulido et al. ). Despite the huge benefits of the multi-omics approach in milk microbiome research, some challenges persist in its application. Integrating meta-omics data requires significant financial and computational resources and high-level cross-disciplinary expertise. Conducting experimental studies using multiple platforms, techniques, and protocols tends to produce heterogeneous and disparate data sets, bringing problems of missing values, reproducibility, data availability, tool scalability and performance, and difficulty in drawing biological inferences (Couvillion et al. ; Arıkan and Muth ). Multi-omics data generated across multiple platforms of analysis require integration methods robust enough to overcome differences in resolution and technique-associated biases so that meaningful biological interpretations can be drawn.
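One simple, widely used step toward reconciling the scale differences between omics layers described above is per-feature standardization (z-scoring) of each layer before joint analysis. The sketch below is a toy illustration of that single pre-integration step only; the feature names and per-sample values are hypothetical.

```python
def zscore(values):
    """Standardize a feature vector to mean 0 and (population) SD 1."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Hypothetical per-sample measurements from two omics layers on very
# different scales: metagenomic read counts vs. metabolite concentrations.
layers = {
    "Staphylococcus_reads": [120, 4500, 980, 310],  # read counts
    "lactate_mM": [0.8, 3.1, 1.2, 0.9],             # concentrations (mM)
}

# After standardization, both features live on a comparable scale and
# can enter a joint analysis (correlation, clustering, ordination, ...).
standardized = {name: zscore(vals) for name, vals in layers.items()}
for name, z in standardized.items():
    print(name, [round(v, 2) for v in z])
```

Real integration frameworks go far beyond this (batch correction, per-layer normalization models, latent-factor methods), but they generally include some variant of this scaling step.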
The advances and application of omics technologies have increased the current understanding of the functionality of microbial communities and their impact on health and disease. While single-omics studies have undoubtedly contributed to our understanding of the milk microbiome and mastitis, they target only a single biological viewpoint, which is insufficient to provide the system-wide information necessary for elucidating the biological footprints and molecular mechanisms driving mastitis pathogenesis. Integrating a multi-omics approach in milk microbiome research could generate new knowledge, improve the current understanding of the structural and functional signatures in the milk ecosystem, and provide insights for the development of mastitis control strategies. Beyond the mechanistic and compositional profiling of the milk microbiome, the use of a multi-omics approach would unravel deeper ecological and functional intricacies in milk, which could be used as a model system. Specific milk microbe-microbe and microbe-environment interactions in health and disease across time and space (during lactation) could be explored using a multi-omics approach in future research. This may provide unprecedented insights into the network of interactions between the milk microbiota and the proteome, transcriptome, and metabolome in health and during the course of mastitis. The use of multi-omics approaches in microbiome research will in the future require the standardization of methods from sampling to experimental protocols and data generation, annotation, integration, and interpretation. The documentation of standardized protocols and workflows will enhance innovation in milk microbiome research. The escalating menace of antimicrobial resistance (AMR) continues to constitute a significant global burden for public health.
The abundance and high diversity of bacterial species in mastitis milk, together with widespread horizontal gene transfer and short bacterial generation times, promote the rapid accumulation and acquisition of multiple antimicrobial resistance genes (Hoque et al. ; Zhang et al. , ; Tran and Dahlin ). Numerous studies have over the years characterized microbial structure and composition in mastitis and non-mastitis milk using different omics approaches. However, the overall resistome of milk and the distribution of specific antibiotic resistance genes have been scarcely studied in the milk microbiome. The dissemination of emergent AMR genes and pathogens through milk could pose health risks to both humans and animals, thus facilitating different infections, including mastitis (Hoque et al. ; Rahmeh et al. ; Shi et al. , ; de Souza et al. ; Rahman et al. ). Mitigating the threat posed by AMR in the milk microbiome requires a deeper understanding of the molecular biomarkers and mechanisms driving AMR emergence and transfer, as well as the integration of a multi-omics approach that will provide insight into the complex interactions in the microbiome. In a recent study characterizing the microbial diversity and resistomes in mastitis and healthy cow milk in India’s coastal district of Odisha, a higher number of antimicrobial resistance genes were recorded in mastitis milk as compared to the non-mastitis healthy milk samples (Sahoo et al. ). This was reported in addition to the significantly higher bacterial abundance and diversity in mastitis milk. While a large pool of antimicrobial resistance genes against macrolides, tetracyclines, β-lactams, peptides, and fluoroquinolones was detected in mastitis milk, only a few antimicrobial resistance genes against β-lactams and aminoglycosides were identified in healthy milk samples (Sahoo et al. ).
It is important to note that antimicrobial resistance genes are often ubiquitous in microbiomes, and they have been detected in most (both pathogenic and non-pathogenic) bacteria within a niche (Tóth et al. ; Samarra et al. ; Sahoo et al. ; Cebeci ). A large-scale omics study of antibiotic resistomes in 2034 milk samples from California, United States, showed significantly increased abundance and richness of antimicrobial resistance genes (Liu et al. , ). Specifically, 49 different antimicrobial resistance genes belonging to 15 antimicrobial resistance groups with 7 mechanisms of resistance were found in the milk samples. Most (80%) of the antimicrobial resistance genes were assigned to a bacterial host at the family level. The bacterial families harboring the predominant antimicrobial resistance genes were Pseudomonadaceae, Enterobacteriaceae, Yersiniaceae, and Moraxellaceae (Liu et al. , ). In an attempt to provide insights into the role of the resistome in the severity of bovine clinical mastitis, Hoque et al. identified two unique groups consisting of 19 genes responsible for resistance to antibiotics and 11 genes for toxic metal resistance in the microbiome of mastitis milk. This report is in consonance with other findings reported elsewhere from mastitis milk of humans (Patel et al. ; Brinkac et al. ; Baron et al. ; Nhu and Young ), cows (Cheng et al. ; Hoque et al. ; Sharifi et al. ; Li et al. , ; Liu et al. ), and buffalo (Preethirani et al. ; Sun et al. ). These findings suggest that the milk microbiome of the mastitis host constitutes an important reservoir for the acquisition and distribution of AMR. An omics study of the bovine colostrum microbiome resistome notably documented an increased diversity in the distribution of AMR genes in buffalo and cow colostrum microbiomes (Yasir et al. ). A total of 175 antimicrobial resistance genes and related variants were identified in cow and buffalo milk, with 55 genes occurring in both groups.
The core resistome analysis detected AMR genes that confer resistance against multiple antibiotic classes, including sulfonamide, tetracycline, fluoroquinolone, aminoglycoside, and peptide antibiotics (Yasir et al. ). Additionally, these AMR genes were associated with the most common antimicrobial resistance mechanisms, including antibiotic efflux, antibiotic inactivation, and antibiotic target alteration. Antimicrobial resistome profiling of the milk microbiome with or without mastitis using a multi-omics approach may provide valuable insights into (1) identifying potential hotspots and reservoirs for the acquisition and distribution of AMR, (2) assessing the public health hazards associated with antimicrobial-resistant infections and antibiotic use, (3) building microbiome-scale knowledge of intrinsic resistance mechanisms, (4) optimizing therapeutic schemes and antimicrobial use in the treatment and prevention of mastitis, and (5) developing a sustainable, concerted One Health approach to mitigating the global menace of AMR.
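Resistome summaries of the kind reported above (gene counts per antibiotic class and per resistance mechanism) are typically derived by tallying an ARG annotation table produced by a resistome annotation tool. A minimal sketch with `collections.Counter` follows; the gene names, classes, and mechanisms are illustrative placeholders, not data from the cited studies.

```python
from collections import Counter

# Hypothetical ARG annotations as (gene, antibiotic_class, mechanism)
# rows, mimicking the tabular output of a resistome annotation tool.
annotations = [
    ("tetW", "tetracycline",   "target protection"),
    ("tetM", "tetracycline",   "target protection"),
    ("blaZ", "beta-lactam",    "antibiotic inactivation"),
    ("ermB", "macrolide",      "target alteration"),
    ("aadA", "aminoglycoside", "antibiotic inactivation"),
    ("mexB", "multidrug",      "antibiotic efflux"),
]

# Tally genes per antibiotic class and per resistance mechanism.
by_class = Counter(cls for _, cls, _ in annotations)
by_mechanism = Counter(mech for _, _, mech in annotations)

print("genes:", len(annotations),
      "classes:", len(by_class),
      "mechanisms:", len(by_mechanism))
for cls, n in by_class.most_common():
    print(f"  {cls}: {n}")
```

The same tallying, applied per sample group (e.g. mastitis vs. healthy milk), yields the class-level comparisons reported in the studies discussed here.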
Integrating Shared Decision-Making into Undergraduate Oncology Education: A Pedagogical Framework | 2cf2f510-b8cc-4a0a-81f7-3b99278216f4 | 11219368 | Internal Medicine[mh] | The concept of shared decision-making (SDM) has undergone a significant evolution, mirroring the shifts in the broader healthcare landscape. Historically, medical decision-making was predominantly clinician-driven, with limited patient involvement. This paradigm shifted notably over the past few decades, as patient autonomy and individual rights gained prominence. The ethos of SDM emerged from this transition, advocating for a more egalitarian approach to healthcare, where patient preferences and values are integrated into the decision-making process . In oncology, this shift is particularly salient. The field has witnessed an exponential growth in therapeutic options, ranging from targeted therapies to immunotherapies, each accompanied by its own risk–benefit profile. This burgeoning complexity makes SDM not only desirable but also essential. SDM in oncology respects the patient’s right to be an active participant in their care, acknowledging the profound personal impact of oncological decisions . Despite the acknowledged importance of SDM, there remains a significant gap in its integration into medical education, particularly at the undergraduate level . Traditional medical curricula often remain focused on the biomedical model, with less emphasis on the skills necessary for effective SDM, such as communication, ethical deliberation, and appreciation of patient values. This educational shortfall is increasingly incongruent with the demands of contemporary oncological practice, where patient involvement in decision-making is not just preferred but expected . This paper, therefore, seeks to address this gap. It proposes a comprehensive pedagogical framework that aligns with the current needs of oncological care. 
By outlining twelve strategic approaches, it aims to embed the principles and practices of SDM into the undergraduate medical curriculum, ensuring that future physicians are not only clinically adept but also proficient in collaboratively navigating the complex decision-making landscape of modern oncology. Systematic Literature Review The literature review was conducted to encompass a broad spectrum of sources relevant to SDM, oncological care, and medical education. The selection criteria for literature included peer-reviewed articles published within the last 15 years, ensuring contemporary relevance. We focused on studies that discussed the implementation of SDM in clinical practice, its integration into medical curricula, and the impact of SDM on patient outcomes in oncology. Databases including PubMed, EMBASE, CINAHL, PsychINFO, and Web of Science were comprehensively searched. Additionally, reference lists of pertinent articles were scrutinized to identify additional relevant literature. The inclusion criteria were stringent, favoring studies that provided empirical evidence or substantial theoretical frameworks regarding SDM in medical education and oncological practice. For a detailed exposition of the search strategy employed, including the specific adaptations made to accommodate the unique indexing and search functionalities of each database, readers are directed to Supplementary information . Pedagogical Alignment Upon identifying strategies from the literature review, the next step involved aligning these strategies with established pedagogical principles. This alignment ensured that the proposed SDM teaching methods were not only evidence-based but also educationally sound and coherent with broader educational objectives. Key pedagogical frameworks considered included experiential learning, reflective practice, adult learning theory, and competency-based education. 
Each identified SDM strategy was scrutinized and adapted to align with these pedagogical principles, ensuring that they could be seamlessly integrated into existing medical curricula and effectively facilitate the learning objectives of SDM. This dual approach of rigorous literature review and pedagogical alignment culminated in the development of a comprehensive, evidence-based, and pedagogically sound framework for incorporating SDM into undergraduate medical education, particularly in the field of oncology. Strategy 1: Ground the Concept of SDM in Real-World Oncology Scenarios To foster a profound understanding of SDM in oncology, it is vital for students to be immersed in clinical settings where SDM is actively practiced . Observing patient consultations offers a practical framework for learning . In these settings, students can observe firsthand how diagnoses, treatment options, and potential outcomes are communicated, aligning clinical evidence with individual patient preferences. In the realm of oncology, where decisions carry significant weight regarding patient outcomes and quality of life, such practical experiences serve to solidify the foundational principles of SDM . It is through this hands-on exposure that students can critically analyze and understand the complexities and importance of integrating clinical evidence with patient values in the decision-making process . Strategy 2: Facilitate Role-Playing Exercises Role-playing remains an effective pedagogical method, especially in contexts that involve interpersonal communication and decision-making . To deepen comprehension of SDM in oncology, students can engage in simulated patient-provider consultations through role-playing.
By alternating roles in these simulations, students gain insights into the challenges faced by healthcare providers and patients alike. These exercises foster effective communication, especially in conveying intricate treatment options and their implications . Additionally, they offer a platform for students to cultivate empathy, a critical skill for addressing sensitive discussions inherent to oncology . Through repeated practice in controlled settings, students can better prepare for real-world oncology consultations, ensuring a balanced integration of patient values and clinical evidence in medical decisions. Strategy 3: Dissect Case Studies with Varied Outcomes Case studies serve as invaluable tools in medical education, offering tangible examples that bridge theory with practice . In exploring the complexities of SDM in oncology, it might be beneficial to expose students to diverse case studies, each demonstrating varied outcomes of the decision-making process. Such studies, especially those highlighting the divergence between patient values and standard medical recommendations, offer insight into the layered nature of SDM. Within oncology, where treatment choices influence both life quality and duration, an appreciation for the range of outcomes influenced by SDM becomes evident . Through examination of these cases, students may gain a deeper understanding of the role of patient values and their potential interplay with clinical advice . This approach aims to prompt students to consider the weight of patient values and how they interplay with clinical recommendations, preparing them for the nuanced discussions they will encounter in their professional practice. Strategy 4: Introduce Decision Aids In the innately complex domain of oncology, the decision-making process is often augmented by a suite of decision aids designed to facilitate understanding and discussion . 
Medical educators should introduce students to these tools, commonly found in the form of risk diagrams, outcome probability charts, and decision trees . These aids aim to make abstract concepts more tangible, allowing patients to visualize potential outcomes, benefits, and risks associated with each treatment option . Training sessions could be designed wherein students learn to employ these aids not merely as passive reference tools but as active instruments for dialog, ensuring that patients comprehend the vast array of information presented to them . By familiarizing themselves with these decision aids, students can foster transparent and productive discussions with patients, anchoring the SDM process in both evidence-based data and individual patient perspectives. Strategy 5: Emphasize the Ethics of SDM in Oncology Oncological care is replete with ethical challenges that arise when aligning medical recommendations with patient preferences . It is crucial for undergraduate medical students to understand these ethical dimensions to navigate potential dilemmas in SDM. As educators, it is essential to guide students in recognizing the nuances between providing hope, setting realistic expectations, and honoring patient autonomy . Incorporating discussions that delve into scenarios where patient desires might diverge from standard medical guidelines can prove insightful . By analyzing these situations, students can explore methods to approach discrepancies between clinical evidence and patient wishes. Ethical considerations also extend to matters of informed consent, treatment discontinuation, and end-of-life choices . Through structured discussions and case analyses, students can be trained to handle these situations with integrity, ensuring decisions are respectful of both medical guidelines and patient values. 
Strategy 6: Teach Communication Skills Specific to Oncology Effective communication is paramount in oncology, where conversations often involve sensitive topics like prognosis and end-of-life care . Medical students must be proficient in tailored verbal and non-verbal communication techniques for such scenarios . Developing focused training modules can enhance skills in delivering clear information while demonstrating empathy. Students should practice conveying complex medical terminology in understandable terms, ensuring that patients and their families fully understand the information. Proficiency in interpreting non-verbal cues and responding appropriately is essential. Role-play exercises can be valuable in this context, enabling students to simulate real consultations . Constructive feedback mechanisms are crucial, aiding students in adapting their communication styles. This rigorous training can equip students with the capacity to establish trust, foster understanding, and collaborate effectively with patients during the SDM process. Strategy 7: Advocate for Continual Reflection The practice of self-reflection is paramount for medical students to critically assess their understanding and application of SDM in oncology . Encouraging students to introspect regularly allows them to identify biases, challenge pre-existing notions, and refine their approach to SDM . Structured reflective exercises, such as journaling assignments, should be incorporated into the curriculum. These should prompt students to reflect upon their experiences in real-world clinical settings, the challenges they faced, and the decision-making processes they observed or participated in . Through this, they can consolidate their learning experiences and analyze them in the context of theoretical SDM principles . Furthermore, fostering reflective group discussions can be beneficial .
Such sessions provide a platform for students to share experiences, gain insights from peers, and collectively evolve their perspectives on SDM. This collaborative form of reflection can aid in highlighting the multifaceted nature of oncological care and emphasize the importance of continual learning in the ever-evolving field of oncology. Strategy 8: Organize Interdisciplinary Collaborations Oncology, by its nature, demands a multifaceted approach to care. Interdisciplinary collaborations are critical for providing comprehensive patient care, and they play a pivotal role in SDM . For medical students, understanding the roles and perspectives of various healthcare professionals is crucial for effective SDM . Incorporating structured interdisciplinary sessions into the curriculum, where students can interact with professionals such as nurses, pharmacists, social workers, and others involved in oncological care, can offer students a broader understanding of the SDM process from varied professional standpoints . It allows them to appreciate the contributions of each discipline and how they interplay in the decision-making process . Such interdisciplinary engagements not only provide diverse perspectives but also emphasize the team-based nature of oncological care . By understanding the different facets of a patient’s care team, students are better prepared to engage in collaborative SDM, ensuring that decisions are well-rounded, considerate of various expert opinions, and are in the best interest of the patient. Strategy 9: Highlight Cultural and Social Sensitivities SDM in oncology is not conducted in a cultural vacuum. Cultural, social, and individual backgrounds play a critical role in shaping patients’ preferences, values, and decisions about their care . To enhance the depth of understanding for medical students, it is essential to expose them to diverse patient populations .
By interacting with patients from various backgrounds, students gain insights into how sociocultural factors influence treatment choices . This exposure can be facilitated through case studies, clinical rotations in diverse settings, or structured interactions with patients of varied backgrounds . Furthermore, sessions on cultural competence may be integrated into the curriculum. These sessions should provide students with knowledge and tools to recognize and respect cultural variations in health beliefs, values, and practices . Emphasizing the importance of cultural competence ensures that students recognize the inherent biases that may arise in clinical interactions, thereby striving for more equitable care. By focusing on the cultural and social sensitivities that impact SDM in oncology, students are better equipped to make decisions that are both medically sound and culturally sensitive, ensuring a more holistic approach to patient care. Strategy 10: Integrate Patient Narratives In oncology, where decisions can have profound implications, understanding the patient’s perspective is paramount . One of the most effective ways to convey this perspective to medical students is through direct narratives from patients or their family members. Inviting patients to share their experiences with diagnosis, treatment, and decision-making offers an invaluable perspective . These personal accounts can illuminate the real-world implications of SDM, highlighting both the challenges faced and the benefits realized when patients are active participants in their care decisions . Furthermore, patient narratives can underscore the emotional dimensions of oncological care, which are often not fully captured in clinical case studies or textbooks . They provide students with a firsthand look at the fears, hopes, and uncertainties that patients grapple with, emphasizing the human aspect of medical practice . 
To optimize the learning from these sessions, it is advisable to follow up with debriefing sessions . Here, students can discuss their takeaways, clarify doubts, and reflect on how these narratives can influence their future practice. By integrating patient narratives into the curriculum, medical students may gain a deeper appreciation for the central role of the patient in SDM, reinforcing the importance of empathy and active listening in clinical practice . Strategy 11: Encourage Evidence-Based Decision-Making Incorporating evidence-based medicine (EBM) into the SDM process is paramount in oncology . While SDM emphasizes patient values and preferences, it is equally crucial for decisions to be grounded in the most recent and relevant evidence. Medical students should be trained to approach clinical situations with a dual lens: one that views the patient’s individual needs and values, and another that scans the existing scientific literature for applicable data . An emphasis should be placed on teaching students the skills to critically evaluate medical research, discerning the quality and relevance of studies . Practical exercises could involve presenting students with a clinical scenario, alongside several research studies, and tasking them with incorporating this evidence into a mock patient consultation. By doing so, they practice weaving together patient values with scientific findings. Moreover, it is important for students to understand that evidence-based recommendations can change over time as new research emerges. Educators should underscore the need for ongoing learning and adaptation in practice, instilling in students a respect for the dynamic nature of medical evidence. This approach not only bolsters the quality of care but also aligns medical interventions closely with patient preferences . Strategy 12: Embrace Digital Tools for SDM The digital landscape offers a promising avenue to enhance SDM in oncology . 
Medical students, as digital natives, stand at the precipice of integrating technology seamlessly into clinical practice, and educators should foster this . Students should be introduced to the concept that various digital tools can assist in the SDM process. For instance, there are platforms designed to visually represent medical data, making it more digestible for patients . These visualization tools can help bridge the comprehension gap, turning abstract data into meaningful insights, allowing for more informed decisions. Another promising avenue is symptom-tracking applications . While students do not need to be versed in specific apps, they should be aware of the potential such tools offer. Regular digital updates on a patient’s well-being can guide discussions and assist in tailoring personalized treatment plans . It is also paramount for students to understand the ethical considerations surrounding digital tools, especially regarding data security and patient privacy . As they progress in their careers, this foundation will enable them to evaluate and incorporate emerging technologies judiciously into their practice, always keeping the patient’s best interests at the forefront . To foster a profound understanding of SDM in oncology, it is vital for students to be immersed in clinical settings where SDM is actively practiced . Observing patient consultations offers a practical framework for learning . In these settings, students can observe firsthand how diagnoses, treatment options, and potential outcomes are communicated, aligning clinical evidence with individual patient preferences. In the realm of oncology, where decisions carry significant weight regarding patient outcomes and quality of life, such practical experiences serve to solidify the foundational principles of SDM . 
It is through this hands-on exposure that students can critically analyze and understand the complexities and importance of integrating clinical evidence with patient values in the decision-making process . Role-playing remains an effective pedagogical method, especially in contexts that involve interpersonal communication and decision-making . To deepen comprehension of SDM in oncology, students can engage in simulated patient-provider consultations through role-playing. By alternating roles in these simulations, students gain insights into the challenges faced by healthcare providers and patients alike. These exercises foster effective communication, especially in conveying intricate treatment options and their implications . Additionally, they offer a platform for students to cultivate empathy, a critical skill for addressing sensitive discussions inherent to oncology . Through repeated practice in controlled settings, students can better prepare for real-world oncology consultations, ensuring a balanced integration of patient values and clinical evidence in medical decisions. Case studies serve as invaluable tools in medical education, offering tangible examples that bridge theory with practice . In exploring the complexities of SDM in oncology, it might be beneficial to expose students to diverse case studies, each demonstrating varied outcomes of the decision-making process. Such studies, especially those highlighting the divergence between patient values and standard medical recommendations, offer insight into the layered nature of SDM. Within oncology, where treatment choices influence both life quality and duration, an appreciation for the range of outcomes influenced by SDM becomes evident . Through examination of these cases, students may gain a deeper understanding of the role of patient values and their potential interplay with clinical advice . 
This approach aims to prompt students to consider the weight of patient values and how they interplay with clinical recommendations, preparing them for the nuanced discussions they will encounter in their professional practice. In the innately complex domain of oncology, the decision-making process is often augmented by a suite of decision aids designed to facilitate understanding and discussion . Medical educators should introduce students to these tools, commonly found in the form of risk diagrams, outcome probability charts, and decision trees . These aids aim to make abstract concepts more tangible, allowing patients to visualize potential outcomes, benefits, and risks associated with each treatment option . Training sessions could be designed wherein students learn to employ these aids not merely as passive reference tools but as active instruments for dialog, ensuring that patients comprehend the vast array of information presented to them . By familiarizing themselves with these decision aids, students can foster transparent and productive discussions with patients, anchoring the SDM process in both evidence-based data and individual patient perspectives. Oncological care is replete with ethical challenges that arise when aligning medical recommendations with patient preferences . It is crucial for undergraduate medical students to understand these ethical dimensions to navigate potential dilemmas in SDM. As educators, it is essential to guide students in recognizing the nuances between providing hope, setting realistic expectations, and honoring patient autonomy . Incorporating discussions that delve into scenarios where patient desires might diverge from standard medical guidelines can prove insightful . By analyzing these situations, students can explore methods to approach discrepancies between clinical evidence and patient wishes. Ethical considerations also extend to matters of informed consent, treatment discontinuation, and end-of-life choices . 
Through structured discussions and case analyses, students can be trained to handle these situations with integrity, ensuring decisions are respectful of both medical guidelines and patient values. Effective communication is paramount in oncology, where conversations often involve sensitive topics like prognosis and end-of-life care . Medical students must be proficient in tailored verbal and non-verbal communication techniques for such scenarios . Developing focused training modules can enhance skills in delivering clear information while demonstrating empathy. Students should practice conveying complex medical terminology in understandable terms, ensuring comprehensive comprehension by patients and their families. Proficiency in interpreting non-verbal cues and responding appropriately is essential. Role-play exercises can be valuable in this context, enabling students to simulate real consultations . Constructive feedback mechanisms are crucial, aiding students in adapting their communication styles. This rigorous training can equip students with the capacity to establish trust, foster understanding, and collaborate effectively with patients during the SDM process. The practice of self-reflection is paramount for medical students to critically assess their understanding and application of SDM in oncology . Encouraging students to introspect regularly allows them to identify biases, challenge pre-existing notions, and refine their approach to SDM . Incorporate structured reflective exercises into the curriculum, such as journaling assignments. These should prompt students to reflect upon their experiences in real-world clinical settings, the challenges they faced, and the decision-making processes they observed or participated in . Through this, they can consolidate their learning experiences and analyze them in the context of theoretical SDM principles . Furthermore, fostering reflective group discussions can be beneficial . 
Such sessions provide a platform for students to share experiences, gain insights from peers, and collectively evolve their perspectives on SDM. This collaborative form of reflection can aid in highlighting the multifaceted nature of oncological care and emphasize the importance of continual learning in the ever-evolving field of oncology. Oncology, by its nature, demands a multifaceted approach to care. Interdisciplinary collaborations are critical for providing comprehensive patient care, and they play a pivotal role in SDM. For medical students, understanding the roles and perspectives of various healthcare professionals is crucial for effective SDM. Incorporating structured interdisciplinary sessions into the curriculum where students can interact with professionals such as nurses, pharmacists, social workers, and others involved in oncological care can offer students a broader understanding of the SDM process from varied professional standpoints. It allows them to appreciate the contributions of each discipline and how they interplay in the decision-making process. Such interdisciplinary engagements not only provide diverse perspectives but also emphasize the team-based nature of oncological care. By understanding the different facets of a patient’s care team, students are better prepared to engage in collaborative SDM, ensuring that decisions are well-rounded, considerate of various expert opinions, and in the best interest of the patient. SDM in oncology is not conducted in a cultural vacuum. Cultural, social, and individual backgrounds play a critical role in shaping patients’ preferences, values, and decisions about their care. To enhance the depth of understanding for medical students, it is essential to expose them to diverse patient populations. By interacting with patients from various backgrounds, students gain insights into how sociocultural factors influence treatment choices.
This exposure can be facilitated through case studies, clinical rotations in diverse settings, or structured interactions with patients of varied backgrounds . Furthermore, sessions on cultural competence may be integrated into the curriculum. These sessions should provide students with knowledge and tools to recognize and respect cultural variations in health beliefs, values, and practices . Emphasizing the importance of cultural competence ensures that students recognize the inherent biases that may arise in clinical interactions, thereby striving for more equitable care. By focusing on the cultural and social sensitivities that impact SDM in oncology, students are better equipped to make decisions that are both medically sound and culturally sensitive, ensuring a more holistic approach to patient care. In oncology, where decisions can have profound implications, understanding the patient’s perspective is paramount . One of the most effective ways to convey this perspective to medical students is through direct narratives from patients or their family members. Inviting patients to share their experiences with diagnosis, treatment, and decision-making offers an invaluable perspective . These personal accounts can illuminate the real-world implications of SDM, highlighting both the challenges faced and the benefits realized when patients are active participants in their care decisions . Furthermore, patient narratives can underscore the emotional dimensions of oncological care, which are often not fully captured in clinical case studies or textbooks . They provide students with a firsthand look at the fears, hopes, and uncertainties that patients grapple with, emphasizing the human aspect of medical practice . To optimize the learning from these sessions, it is advisable to follow up with debriefing sessions . Here, students can discuss their takeaways, clarify doubts, and reflect on how these narratives can influence their future practice. 
By integrating patient narratives into the curriculum, medical students may gain a deeper appreciation for the central role of the patient in SDM, reinforcing the importance of empathy and active listening in clinical practice . Incorporating evidence-based medicine (EBM) into the SDM process is paramount in oncology . While SDM emphasizes patient values and preferences, it is equally crucial for decisions to be grounded in the most recent and relevant evidence. Medical students should be trained to approach clinical situations with a dual lens: one that views the patient’s individual needs and values, and another that scans the existing scientific literature for applicable data . An emphasis should be placed on teaching students the skills to critically evaluate medical research, discerning the quality and relevance of studies . Practical exercises could involve presenting students with a clinical scenario, alongside several research studies, and tasking them with incorporating this evidence into a mock patient consultation. By doing so, they practice weaving together patient values with scientific findings. Moreover, it is important for students to understand that evidence-based recommendations can change over time as new research emerges. Educators should underscore the need for ongoing learning and adaptation in practice, instilling in students a respect for the dynamic nature of medical evidence. This approach not only bolsters the quality of care but also aligns medical interventions closely with patient preferences . The digital landscape offers a promising avenue to enhance SDM in oncology . Medical students, as digital natives, stand at the precipice of integrating technology seamlessly into clinical practice, and educators should foster this . Students should be introduced to the concept that various digital tools can assist in the SDM process. 
For instance, there are platforms designed to visually represent medical data, making it more digestible for patients. These visualization tools can help bridge the comprehension gap, turning abstract data into meaningful insights, allowing for more informed decisions. Another promising avenue is symptom-tracking applications. While students do not need to be versed in specific apps, they should be aware of the potential such tools offer. Regular digital updates on a patient’s well-being can guide discussions and assist in tailoring personalized treatment plans. It is also paramount for students to understand the ethical considerations surrounding digital tools, especially regarding data security and patient privacy. As they progress in their careers, this foundation will enable them to evaluate and incorporate emerging technologies judiciously into their practice, always keeping the patient’s best interests at the forefront. The strategies presented in this manuscript offer a comprehensive approach to integrating SDM practices into undergraduate medical education, specifically within the context of oncology. While these strategies provide a robust framework, there are certain challenges and considerations that merit discussion to ensure the successful implementation and enhancement of SDM education.
Challenges and Limitations
The incorporation of SDM into medical education faces several challenges, including constraints on curricular time, competing educational priorities, and potential resistance stemming from entrenched traditional teaching methodologies. These hurdles necessitate a proactive and strategic approach to ensure successful integration. Key to overcoming these obstacles is the development of workshops and training sessions specifically designed for educators. These sessions should aim not only to acquaint educators with the SDM strategies but also to highlight their significance and positive impact on patient care.
Furthermore, tailoring these strategies to suit the specific resources and time limitations of each educational institution is critical for enhancing the practicality and effectiveness of their implementation. Recognizing and actively addressing these challenges with pragmatic solutions is essential to create a conducive environment for the seamless integration of SDM into the medical curriculum.
Cultural and Social Sensitivities
SDM in medical practice is not solely based on medical evidence and individual patient preferences; it also necessitates a keen awareness of cultural and social nuances. The influence of a patient’s cultural background, beliefs, and social milieu significantly shapes medical decision-making. Educational exposure to a diverse array of patient populations through case studies and clinical interactions is vital for medical students to comprehend the impact of these sociocultural factors on treatment decisions. Training in cultural competence is imperative to provide students with the necessary skills to adeptly navigate these multifaceted aspects, thereby promoting respectful and equitable patient-centered care. By integrating cultural and social considerations into the SDM process, medical students are better equipped to participate in meaningful decision-making discussions, effectively acknowledging and respecting individual patient values within the rich tapestry of cultural diversity.
Assessment and Evaluation
The effectiveness of the implemented SDM strategies within the educational framework can be appraised through a variety of evaluative measures. Objective assessment instruments, such as standardized patient encounters, are pivotal for gauging students’ proficiency in applying SDM principles in practical clinical contexts.
Additionally, self-assessment methodologies offer students the opportunity for introspection, enabling them to critically assess their progression in comprehending and implementing SDM concepts throughout their educational journey. Furthermore, soliciting feedback from both students and educators regarding the perceived efficacy of these strategies is crucial for the iterative refinement of the educational approach. Continuous evaluation is essential to ensure that the learning objectives are met and that the strategies adaptively align with the dynamic requirements of medical education and evolving patient care paradigms.
Holistic Approach to Decision-Making
SDM extends beyond basic communication skills, necessitating a comprehensive and integrative approach to medical decision-making. The strategies proposed in this paper advocate for an educational model that transcends traditional clinical evidence and patient preferences, incorporating ethical considerations, interdisciplinary collaboration, patient narratives, and the pragmatic use of digital tools. This amalgamation aims to equip medical students with the capability to proficiently address the diverse and complex aspects of oncological care, including its emotional, ethical, and technological dimensions. By adopting this multifaceted approach, the framework aspires to enhance the overall quality of patient care and to prepare future medical practitioners for the intricate and evolving landscape of contemporary medicine, albeit with an understanding of the inherent challenges and limitations of such an integrative approach.
In synthesizing the necessity for embedding SDM within undergraduate oncology education, this paper presents a framework that aligns with the evolving dynamics of medical practice, particularly in the realm of oncology.
The strategies outlined herein propose a methodological shift in medical education, moving toward a model that balances clinical expertise with patient-centered decision-making. This framework, derived from a systematic literature review and attuned to established pedagogical theories, recognizes the intricacies of oncological care and the heterogeneity of patient encounters. Strategies such as the incorporation of real-world clinical experiences, simulated patient interactions, analytical case studies, and the use of decision aids are posited as essential components in cultivating a deeper understanding of SDM. Furthermore, the emphasis on communication skills, reflective practice, interdisciplinary collaboration, cultural competence, and digital literacy aims to equip students with a diverse skill set necessary for contemporary oncological practice. The transition toward a curriculum that emphasizes patient autonomy and collaborative decision-making mirrors the broader shift in healthcare toward more patient-inclusive models. This is particularly relevant in oncology, where treatment options and patient preferences are notably complex. However, the implementation of these strategies requires careful consideration and adaptation to the specific contexts of educational institutions. The proposed framework, while comprehensive, is not without its limitations and should be viewed as a starting point for further development and customization. Educators and practitioners in the field of medicine carry the responsibility of shaping future medical practice. This framework represents a considered approach to this responsibility, acknowledging the evolving nature of medical science and the importance of patient-physician relationships. However, it must be noted that the integration of these strategies into medical curricula requires ongoing evaluation and adaptation to ensure their effectiveness and relevance. 
In summary, the integration of SDM into the undergraduate oncology curriculum is a critical step toward enhancing patient-centered care in medical practice. This manuscript offers a foundational framework for this integration, providing a structured yet adaptable approach. It is imperative that medical educators and institutions approach the adoption of these strategies with a discerning perspective, ensuring that they are effectively integrated into the fabric of medical education and practice.
Water Activity as an Indicator for Antibody Storage Stability in Lyophilized Formulations

Introduction
Lyophilization, or freeze-drying, is still one of the gold standards for the storage and preservation of sensitive biopharmaceuticals, e.g., monoclonal antibodies (mAbs) or viral vectors. Stabilization within lyophilized biopharmaceutical formulations during drying and storage is achieved by the addition of excipients or excipient combinations and by adjustment of the dry matrix to a defined residual moisture. The selection of excipients, and the decision on the residual moisture after drying, is usually based on empirical rules and personal expertise. Two generally accepted stabilization mechanisms govern the (long-term) stability of biopharmaceuticals and biologics after lyophilization:
(1) The vitrification theory attributes the stabilization of biopharmaceuticals in lyophilized formulations to the kinetic inhibition of molecular interactions. High viscosities in the lyophilized formulation slow down molecular interactions and thus delay degradation reactions and prolong shelf life. Specifically important in this context are the glass-transition temperature T_g and relaxation phenomena within the lyophilized formulation.
(2) The water-replacement theory attributes the stabilization of biopharmaceuticals in lyophilized formulations to molecular interactions in the formulation matrix. The excipients reinstitute interactions with the biopharmaceuticals formerly governed by water molecules in the liquid (solution) formulation. Specifically, properties like the hydrogen bonding between excipients and biopharmaceuticals can be connected to this stabilization principle.
Many former investigations have shown that the stabilization of biopharmaceuticals is not exclusively explained by one of the two mechanisms.
For example, macroscopic (cake) stability can be compromised by an insufficiently high glass-transition temperature, eventually even leading to collapse of the amorphous cake, although the excipients present are proven to be beneficial with regard to their water-replacement properties. Vice versa, a high glass-transition temperature, and thus low reaction rates and slow relaxations, would not necessarily stabilize the biopharmaceutical in the lyophilized formulation when water replacement is provided insufficiently. In order to design a stabilizing formulation for biopharmaceuticals, attention must therefore be paid to the interplay of both stabilization mechanisms. Indicators for the stabilization of biopharmaceuticals via vitrification are high-frequency β-relaxations in the lyophilizate. It was shown that the mean-squared displacement of hydrogen atoms is correlated with the chemical degradation rate. A possible indicator for water replacement is hydrogen bonds being reinstituted by excipients depending on the molar/mass ratio of excipient/biopharmaceutical. Advantageous ratios for the hydrogen bonding and the stabilization of biopharmaceuticals have been identified experimentally for various excipients. Generally, interexcipient differences cannot be accounted for, and no general statement is possible. Aside from these sole indicators, reliable methods such as the ReFOLD assay for proteins published by Svilenov and Winter, which provide efficient and rapid information on the stabilizing capabilities of different formulations, have been developed lately. Although offering valuable insight into the stabilizing capabilities of lyophilized formulations, the disadvantage of all mentioned methods and indicators is that each formulation has to be analyzed separately (high-throughput screening experiments).
It is not possible to make predictive statements about potential stabilization capabilities, especially at different residual moisture values or for new combinations of excipients. Aside from the stabilization potential of certain excipients and excipient combinations, the impact of residual moisture on the (long-term) formulation stability is often considered only a side note. Heuristics often suggest that, depending on the excipients used, drying is typically carried out to below 3 wt %, preferably 2 wt %, water/residual moisture. This is meant to ensure that the glass-transition temperature of the lyophilizate is sufficiently high to achieve low/zero molecular mobility in the lyophilized solution. It has been controversially discussed for many years but has now been confirmed that lyophilized formulations can be “over-dried” and, as a result, would then provide poor protein stabilization. As a rule of thumb, water concentrations of well below 0.1 wt % should therefore be avoided. However, no universally beneficial residual moisture level for formulations has been identified yet. In general, predictive and reliable strategies for the design of lyophilized formulations are highly desirable, aside from classical pharma proteins such as monoclonal antibodies, especially in the context of evolving biologic entities such as, e.g., vaccines and viral vectors. Within this work, we thus propose a novel thermodynamics-based design strategy, combining the water activity a_w and the glass-transition temperature T_g as determinants for a quantitative and holistic access to the (long-term) stability of lyophilized formulations. Water activity herein can be regarded as a measure/indicator of water availability both in the initial liquid formulation and in the lyophilizate after freeze-drying. This free, unbound water, which is not bound to any surfaces or occupied in molecular interactions/chemical bonds, can participate in the stabilization of biopharmaceuticals.
Water activity is known to be a crucial factor influencing the degradation of antibodies and the chemical stability of the amorphous phase in which the antibody is preserved. Water activity is defined as the product of the water activity coefficient γ_w and the water mole fraction x_w (which is identical with the residual moisture) in the liquid/amorphous/dried phase (eq 1):
a_w = γ_w · x_w (1)
The water activity coefficient γ_w is a measure for the (molecular) interactions of water with its surrounding molecules and thus for nonidealities in solution. It is highly affected by the excipients used as well as their concentrations. Water activity therefore contains combined information on the impact of excipients and excipient combinations in the amorphous phase as well as on the effect of residual moisture in the amorphous phase. If aiming for a certain value of water activity, the only levers are the water activity coefficient, which is influenced by the type and concentration of excipients, and the residual moisture. Calculation of the water activity coefficients can easily be achieved using the equation of state perturbed-chain statistical associating fluid theory (PC-SAFT). The second determinant considered in our approach is T_g, which is also influenced by the excipient choice and the residual moisture. Following the vitrification theory, a lyophilized formulation must have a glass-transition temperature T_g that is much higher than the storage temperature to be kinetically frozen, which dramatically slows down degradation reaction kinetics and molecular mobility. This also prevents mechanical collapse of the lyophilized formulation during storage, preserving a macroscopically appealing product. Within our design strategy, we use the Gordon–Taylor approach (eq 2, written for ternary systems) to estimate/predict T_g of the respective excipients/excipient combinations as a function of the residual moisture.
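The definition in eq 1 can be sketched numerically as follows; the values for γ_w and x_w in the example are hypothetical placeholders, not data from this study:

```python
def water_activity(gamma_w: float, x_w: float) -> float:
    """Water activity a_w = gamma_w * x_w (eq 1): the product of the
    water activity coefficient and the water mole fraction
    (residual moisture) in the amorphous/liquid phase."""
    if not 0.0 <= x_w <= 1.0:
        raise ValueError("water mole fraction must lie in [0, 1]")
    return gamma_w * x_w

# Hypothetical amorphous phase: gamma_w = 0.45, x_w = 0.10
print(round(water_activity(0.45, 0.10), 4))  # -> 0.045
```

Note that for an ideal solution (γ_w = 1) the water activity reduces to the residual moisture itself; deviations from unity quantify the nonideal water–excipient interactions described above.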
Herein, the glass-transition temperature T_g of the mixture is estimated from the pure components’ glass-transition temperatures T_g,i and the respective weight fractions w_i of the three components, that is, water and two excipients. The two Gordon–Taylor constants k_1 and k_2 are fitted to the glass-transition temperatures of the respective binary excipient/water systems (eq 2):
T_g = (w_water · T_g,water + k_1 · w_1 · T_g,1 + k_2 · w_2 · T_g,2) / (w_water + k_1 · w_1 + k_2 · w_2) (2)
As both values, water activity a_w and T_g, are thus coupled by the residual moisture and the excipient type used, finding a formulation that exhibits a promising water activity and kinetic stabilization through a sufficiently high T_g requires simultaneous optimization. In the first step, the water activity coefficient of the excipient composition was optimized and maximized to allow the widest possible design range for the residual moisture. Subsequently, Gordon–Taylor was used to calculate the range of residual moisture that enables kinetic stabilization. This approach was tested on a sucrose/ectoine model system. Compositions that were predicted by this approach to be advantageous were identified, and their selection was validated with regular stability studies.
Materials and Methods
2.1 Chemicals
Sucrose with a purity of ≥99.5%, dl-proline with a purity of ≥99%, and l-arginine with a purity of ≥99% were purchased from Sigma-Aldrich Co. LLC (Hamburg, Germany). l-Histidine monohydrochloride monohydrate with a purity of ≥99% was purchased from Thermo Fisher Scientific (Darmstadt, Germany). Polysorbate 20 was purchased from Croda Inc. (Snaith, UK). Ectoine with a purity of ≥98.5% was provided by bitop AG (Witten, Germany). Water from a Sartorius purification system was used to prepare the samples for analysis. The studied IgG1 antibody had a size of around 145 kDa.
2.2 Stability Testing
After the water activity of formulations from the literature was modeled using PC-SAFT and correlated with the corresponding stability data, this correlation was expanded and consolidated with further stability studies.
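A minimal sketch of the ternary Gordon–Taylor estimate (eq 2) is given below. The pure-component T_g values and the constants k_1 and k_2 in the example are illustrative placeholders only, not fitted values from this work:

```python
def gordon_taylor_ternary(w_water, w_exc1, w_exc2,
                          tg_water, tg_exc1, tg_exc2,
                          k1, k2):
    """Ternary Gordon-Taylor glass-transition temperature (eq 2).
    Weight fractions must sum to one; k1 and k2 are fitted to the
    respective binary excipient/water systems. Tg values in K."""
    assert abs(w_water + w_exc1 + w_exc2 - 1.0) < 1e-9, "weight fractions must sum to 1"
    num = w_water * tg_water + k1 * w_exc1 * tg_exc1 + k2 * w_exc2 * tg_exc2
    den = w_water + k1 * w_exc1 + k2 * w_exc2
    return num / den

# Illustrative call: 2 wt % residual water, 78 wt % excipient 1,
# 20 wt % excipient 2; Tg and k values are hypothetical.
tg_mix = gordon_taylor_ternary(0.02, 0.78, 0.20,
                               138.0, 348.0, 360.0,
                               k1=4.7, k2=3.0)
print(round(tg_mix, 1))
```

Because water has by far the lowest pure-component T_g, increasing the residual moisture fraction in this expression lowers the mixture T_g, which is exactly the plasticizing effect the design strategy has to balance against the water activity target.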
For this purpose, advantageous excipient combinations were identified and subjected to stability studies.
2.2.1 Preparation and Lyophilization
Before use, the antibody solution was purified with an ÄKTA purification system using a Sepharose HiTrap SP column from Cytiva (Marlborough, USA). The buffer was exchanged for a 20 mM histidine buffer at pH 5.5 with 0.2% PS20 using a Minimate crossflow filtration unit with a poly(ether sulfone) (PES) membrane with a molecular weight cutoff of 30 kDa from Pall Corporation (New York, USA). To achieve the desired antibody concentration of 10 mg/mL, a NanoDrop 2000 UV photometer from Thermo Fisher Scientific (Waltham, USA) was used, and the antibody solution was diluted accordingly. Excipient stock solutions were prepared using the same histidine buffer as for the antibody stock solution. Prior to mixing the various stock solutions, filtration using a 0.22 μm PES Sartolab RF vacuum filter unit from Sartorius (Göttingen, Germany) was performed for all stock solutions. To prevent moisture ingress from the vials and stoppers during storage, the 2 R vials from MGlas (Münnerstadt, Germany) and the 13 mm Lyo Nova Pure RS 1356 4023/50 G stoppers from West Pharmaceutical Services (Paxton, USA) were dried at 105 °C for 6 h and at 80 °C for 8 h, respectively. Stock solutions were mixed to achieve the desired excipient concentrations; see . After that, 1 mL of formulation was filled into the vials, which were semistoppered. Two rows of buffer-filled vials were placed on the lyophilization rack as a radiation shield around the sample vials. The prepared formulations were lyophilized following the program listed in . For formulations containing arginine/proline, lyophilization was performed with and without an additional annealing step in an Epsilon 2-6D LCSplus from Martin Christ (Osterode am Harz, Germany). The remaining formulations were dried using an FTS LyoStar from SP Scientific (Stone Ridge, USA).
After lyophilization, the vials were stoppered at 600 mbar and crimped with flip-off seals. Samples were stored/stressed at 25 and 40 °C, and analysis was performed after drying and after 9 months.
2.2.2 Karl Fischer Titration
Residual moisture after lyophilization was determined with Karl Fischer headspace titration. An AQUA 40.00 titrator from ECH Elektrochemie Halle GmbH (Halle, Germany) was used. A sample mass of around 20 mg was prepared under a dry atmosphere with <10% relative humidity. Water in the samples was then evaporated and transferred to the titration chamber for moisture content determination.
2.2.3 Differential Scanning Calorimetry
Glass-transition temperatures were determined by differential scanning calorimetry (DSC) using a Q2000 DSC with an attached RCS90 temperature control unit from TA Instruments (Eschborn, Germany). Between 5 and 10 mg of lyophilized sample was hermetically sealed in aluminum pans and heated from 0 to 200 °C with a heating ramp of 5 K/min. T_g was interpreted as the inflection point of the heat flow using the TA Universal Analysis software.
2.2.4 Powder X-ray Diffraction
To ensure the maintained amorphicity of the lyophilized samples, powder X-ray diffraction (PXRD) was performed using a Miniflex 600 from Rigaku (Tokyo, Japan) with a Cu Kα anode in reflection mode with a tube voltage of 40 kV and a current of 15 mA. The scanning rate was 5° 2θ/min from 5° to 35° 2θ.
2.2.5 Size-Exclusion Chromatography
The monomer content of the antibody formulations was determined using a 1260 Infinity II Quaternary size-exclusion chromatography (SEC) system from Agilent Technologies (Santa Clara, USA). It includes pumping, degassing (G7111B), autosampling (G7129), UV–vis absorption (G7115A), and RI refraction (G7162A) modules, as well as light scattering using the miniDAWN from Wyatt Technology Corporation (Santa Barbara, USA). For separation, a SEC Superdex 200 Increase 10/300 GL column from Cytiva (Marlborough, USA) was used.
UV analysis was performed at a 280 nm wavelength. Prior to analysis, lyophilized samples were reconstituted with 910 μL of purified water. Samples were then centrifuged at 10,000 rpm for 10 min using a 5425 centrifuge from Eppendorf (Hamburg, Germany). For SEC analysis, 10 μL was injected. Mobile phase was a 50 mM phosphate buffer at pH 7 with a flow rate of 1 mL/min. Astra V.7.3.2 from Wyatt Technology Corporation (Santa Barbara, USA) was used to determine the mass fractions of antibody monomer. From the antibody monomer content, antibody (mAb) monomer retention was calculated. 3 2.2.6 Nano Differential Scanning Fluorimetry Thermal stability was analyzed using nano differential scanning fluorimetry (nanoDSF) using the Prometheus NT.48 from NanoTemper (Munich, Germany). Reconstituted samples (see ) were put into capillaries and heated from 20 to 90 °C with a heating ramp of 2 K/min. The fluorescence ratio F350/F330 was used to determine the unfolding temperature T unfold and the aggregation temperature T agg . Chemicals Sucrose with a purity of ≥99.5%, dl -proline with a purity of ≥99%, and l -arginine with a purity of ≥99 were purchased from Sigma-Aldrich Co. LLC (Hamburg, Germany). l-Histidine monohydrochloride monohydrate with a purity of ≥99% was purchased from Thermo Fisher Scientific (Darmstadt, Germany). Polysorbate 20 was purchased from Croda Inc. (Snaith, UK). Ectoine with a purity of ≥98.5% was provided from bitop AG (Witten, Germany). Water from a Sartorius purification system was used to prepare the samples for analysis. The studied IgG 1 antibody had a size of around 145 kDa. Stability Testing After the water activity of formulations from the literature was modeled using PC-SAFT and correlated with the corresponding stability data, this correlation was expanded and consolidated with further stability studies. For this purpose, advantageous excipient combinations were identified and subjected to stability studies. 
2.2.1 Preparation and Lyophilization
Before use, the antibody solution was purified with an ÄKTA purification system using a Sepharose HiTrap SP column from Cytiva (Marlborough, USA). The buffer was exchanged against a 20 mM histidine buffer at pH 5.5 with 0.2% PS20 using a Minimate crossflow filtration unit with a poly(ether sulfone) (PES) membrane with a molecular weight cutoff of 30 kDa from Pall Corporation (New York, USA). The antibody solution was diluted to the desired concentration of 10 mg/mL, verified with a NanoDrop 2000 UV photometer from Thermo Fisher Scientific (Waltham, USA). Excipient stock solutions were prepared in the same histidine buffer as the antibody stock solution. Prior to mixing, all stock solutions were filtered through a 0.22 μm PES Sartolab RF vacuum filter unit from Sartorius (Göttingen, Germany). To prevent moisture ingress from the vials and stoppers during storage, the 2R vials from MGlas (Münnerstadt, Germany) and the 13 mm Lyo Nova Pure RS 1356 4023/50 G stoppers from West Pharmaceutical Services (Paxton, USA) were dried at 105 °C for 6 h and at 80 °C for 8 h, respectively. Stock solutions were mixed to achieve the desired excipient concentrations; see . After that, 1 mL of formulation was filled into each vial, which was then semistoppered. Two rows of buffer-filled vials were placed on the lyophilization rack as a radiation shield around the sample vials. The prepared formulations were lyophilized following the program listed in . Formulations containing arginine/proline were lyophilized with and without an additional annealing step in an Epsilon 2-6D LCSplus by Martin-Christ (Osterode am Harz, Germany). The remaining formulations were dried using an FTS LyoStar from SP Scientific (Stone Ridge, USA). After lyophilization, the vials were stoppered at 600 mbar and crimped with flip-off seals.
Samples were stored/stressed at 25 and 40 °C, and analysis was performed after drying and after 9 months.
2.2.2 Karl Fischer Titration
Residual moisture after lyophilization was determined with Karl Fischer headspace titration using an AQUA 40.00 titrator from ECH Elektrochemie Halle GmbH (Halle, Germany). A sample mass of around 20 mg was prepared under a dry atmosphere with <10% relative humidity. Water in the samples was then evaporated and transferred to the titration chamber for moisture content determination.
2.2.3 Differential Scanning Calorimetry
Glass-transition temperatures were determined by differential scanning calorimetry (DSC) using a Q2000 DSC with an attached RCS90 temperature control unit from TA Instruments (Eschborn, Germany). Between 5 and 10 mg of lyophilized sample was hermetically sealed in aluminum pans and heated from 0 to 200 °C with a heating ramp of 5 K/min. T g was taken as the inflection point of the heat flow using the TA Universal Analysis software.
2.2.4 Powder X-ray Diffraction
To confirm that the lyophilized samples remained amorphous, powder X-ray diffraction (PXRD) was performed on a Miniflex 600 from Rigaku (Tokyo, Japan) with a Cu Kα anode in reflection mode at a tube voltage of 40 kV and a current of 15 mA. The scanning rate was 5° 2θ/min from 5° to 35° 2θ.
2.2.5 Size-Exclusion Chromatography
The monomer content of the antibody formulations was determined using a 1260 Infinity II Quaternary size-exclusion chromatography (SEC) system from Agilent Technologies (Santa Clara, USA). It comprises a pump with degasser (G7111B), an autosampler (G7129), a UV–vis absorbance detector (G7115A), a refractive index detector (G7162A), and a miniDAWN light-scattering detector from Wyatt Technology Corporation (Santa Barbara, USA). For separation, a Superdex 200 Increase 10/300 GL SEC column from Cytiva (Marlborough, USA) was used. UV analysis was performed at a wavelength of 280 nm.
Prior to analysis, lyophilized samples were reconstituted with 910 μL of purified water and centrifuged at 10,000 rpm for 10 min in a 5425 centrifuge from Eppendorf (Hamburg, Germany). For SEC analysis, 10 μL was injected. The mobile phase was a 50 mM phosphate buffer at pH 7 with a flow rate of 1 mL/min. Astra V.7.3.2 from Wyatt Technology Corporation (Santa Barbara, USA) was used to determine the mass fractions of antibody monomer. From the antibody monomer content, the antibody (mAb) monomer retention was calculated.
2.2.6 Nano Differential Scanning Fluorimetry
Thermal stability was analyzed by nano differential scanning fluorimetry (nanoDSF) using the Prometheus NT.48 from NanoTemper (Munich, Germany). Reconstituted samples (see ) were filled into capillaries and heated from 20 to 90 °C with a heating ramp of 2 K/min. The fluorescence ratio F350/F330 was used to determine the unfolding temperature T unfold and the aggregation temperature T agg .
Modeling Using PC-SAFT
The activity coefficients necessary for the calculation of the water activity are derived from the residual Helmholtz energy A res, which is calculated with PC-SAFT. A res is composed of different contributions: A hard-chain accounts for hard-chain repulsions, A dispersion for dispersive (van der Waals) attractions, and A association for hydrogen bonding:

A res = A hard-chain + A dispersion + A association (4)

For the calculation of A res, each molecule is described as a chain of m i seg segments, with each segment having a diameter σ i. In addition, the dispersion-energy parameter u i /k B, the association-energy parameter ε AiBi /k B, and the association volume κ AiBi are considered. The mixture parameters σ ij and u ij are calculated using Berthelot–Lorentz combining rules, introducing an adjustable binary interaction parameter k ij:

σ ij = (σ i + σ j )/2 (5)
u ij = (u i u j ) 1/2 (1 − k ij ) (6)

The binary interaction parameter k ij can be temperature-dependent, with a constant value k ij,0K at 0 K and a temperature slope k ij,T:

k ij (T) = k ij,0K + k ij,T · T (7)

For the calculation of the association energy and volume, the mixing rules of Wolbach and Sandler were used:

ε AiBj = (ε AiBi + ε AjBj )/2 (8)
κ AiBj = (κ AiBi κ AjBj ) 1/2 [(σ i σ j ) 1/2 /((σ i + σ j )/2)] 3 (9)

The PC-SAFT pure-component parameters for the calculation of a w for the data from Haeuser et al., containing cyclodextrins, recombinant human albumin, and polyvinylpyrrolidone in addition to sucrose, ectoine, arginine, and proline, are listed in . To model the influence of recombinant human albumin, the pure-component and interaction parameters of bovine serum albumin were used due to its comparable structure. The binary interaction parameters used for the calculation of the water activity are listed in .
Results and Discussion
4.1 Applicability of Water Activity as a Stability Criterion in Lyophilized Formulations
In order to evaluate the applicability of water activity as a stability criterion in lyophilized formulations, we first investigated a possible correlation using available literature data.
Stability data were selected from mAb formulations with different excipient compositions but largely uniform residual moisture. The water activity of the respective formulations was calculated using PC-SAFT, as described in . The results are listed in . Detailed compositions of the respective formulations are given in the appendix. Considering monomer retention as a function of water activity reveals a clear correlation (see ) with respect to (long-term) formulation stability. Formulations with very low water activity values show a significant loss in monomer content of up to 16% when stored at 40 °C for 90 days (e.g., no. 2 in , a w = 0.000048). In a water activity range between a w = 0.000633 and a w = 0.0127, monomer retention of over 98% could be achieved after samples were stored for 90 days at 40 °C. The highest and almost complete monomer retention (>99.9%) was observed at water activities of 0.23 (formulation #23 in ) and 0.24 (formulation #24 in ). Based on the data analyzed, the results clearly suggest that the water activity of a formulation should be higher than 0.025 if a monomer retention of >97% is desired, and higher than 0.23 to allow for the best (long-term) stabilization of the antibodies (>99.9% monomer retention). It is crucial to avoid water activities below 0.000275, as (based on the formulations investigated) this will lead to significant monomer loss during long-term storage. It has to be mentioned that the values of 0.025 and 0.23 were defined based on the available data set; with other, more comprehensive stability studies, even slightly lower water activities might be tolerable.
4.1.1 Identifying the Optimal Water Activity Range
With the lower boundary (minimal value) for water activity already available through the previous investigations, further investigations were performed to identify a useful upper water activity boundary. The water activity in a lyophilizate after drying can typically reach values of up to 0.4.
Higher values are excluded because a meaningful dry product is constrained to an accepted residual moisture of at most ca. 3 wt %. As illustrated in , from a general point of view, two main chemical degradation routes are typically taken into account for lyophilized formulations: oxidation kinetics can be reduced by increasing the water activity, with minimum oxidation expected at a water activity of 0.4; however, above a critical water activity of 0.28, the browning reaction rate increases steadily. As an optimal compromise between the two degradation reactions, the water activity should be as high as possible while remaining close to, but below, the critical water activity of 0.28, thereby minimizing the oxidation rate and preventing the browning reaction from taking place. This theoretical consideration overlaps well with the results from the experiments reported above in , defining the water activity window in lyophilized formulations to be in the region of 0.025 to 0.25.
4.2 Initial Excipient Choice Based on a w and T g
4.2.1 Identification of Promising Excipient Combinations Based on γ w
As one of the two levers for tuning a w to a specific value within the desired water activity range, investigations were performed to identify excipient combinations that offer a broad tunable range in γ w values depending on their composition. PC-SAFT calculations were performed as described in . The results illustrated in show a promising, useful system (sucrose/ectoine: a high range in γ w values, mostly above those of the single components), a non-optimal system (sucrose/arginine: a low range in γ w values, mostly below those of the single components), and a "no effect" system with practically no differences in γ w values over the entire ratio range of mixtures (sucrose/proline).
Systems such as sucrose/ectoine shown in are preferable, as the high range in γ w values simultaneously allows a high range of residual moistures to be considered while still meeting the water activity range criterion.
4.2.2 Tuning T g through the Residual Moisture
The second lever investigated for tuning a w to a specific value is the residual moisture x w, which is directly connected to the T g of the formulation. Care has to be taken to ensure that the T g of the formulation is far higher than the storage temperature (in order to ensure a glassy state, in adherence to vitrification theory, and to avoid cake collapse). It is well known that the glass-transition temperature of the lyophilizate decreases with an increase in residual moisture, because water has a glass-transition temperature of −137 °C. The T g of the formulations was calculated using the Gordon–Taylor equation, as described in the Introduction. Gordon–Taylor constants were fitted to the glass-transition temperatures T g ′ of sucrose/water and ectoine/water, as stated in the literature. Fitting resulted in Gordon–Taylor constants of 0.311 for sucrose/water and 0.4 for ectoine/water. The effect of residual moisture on T g is depicted in , where an increase of 1.5 wt % in residual moisture lowers T g by about 9 K for all sucrose/ectoine excipient compositions. The Gordon–Taylor equation thus allows us to calculate the (critical) residual moisture for all binary compositions of sucrose/ectoine at which the T g of the formulation/lyophilizate is still above the critical value of 40 °C. This temperature was selected in order to avoid collapse of the lyophilizate during storage at T storage = 25 °C and to ensure "zero mobility" at 2–8 °C as the regular storage temperature. For pure sucrose, the threshold T g of 40 °C is reached at a residual moisture content of 3.4 wt %. For pure ectoine, the threshold is already reached at 1.6 wt % residual moisture.
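The critical-moisture calculation above can be sketched numerically. This is an illustrative example, not the authors' code: the Gordon–Taylor constant K = 0.311 for sucrose/water is taken from the text, while the dry-sucrose glass-transition temperature of 60 °C (333.15 K) is an assumed literature-range value, so the exact result depends on that choice.

```python
# Gordon-Taylor estimate of the critical residual moisture at which a
# sucrose lyophilizate's Tg drops to the 40 °C threshold used in the text.
# K = 0.311 (sucrose/water) is the fitted constant reported above; the
# dry-sucrose Tg of 333.15 K (60 °C) is an ASSUMED literature-range value.

def gordon_taylor(w_water, tg_water=136.15, tg_solid=333.15, k=0.311):
    """Tg (K) of a binary water/solid glass at water mass fraction w_water."""
    w_solid = 1.0 - w_water
    return (w_water * tg_water + k * w_solid * tg_solid) / (w_water + k * w_solid)

def critical_moisture(tg_target):
    """Bisect for the water mass fraction where Tg falls to tg_target (K)."""
    lo, hi = 0.0, 0.5  # Tg decreases monotonically with w_water here
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gordon_taylor(mid) > tg_target:
            lo = mid  # Tg still above target -> more water is tolerable
        else:
            hi = mid
    return 0.5 * (lo + hi)

w_crit = critical_moisture(313.15)  # target Tg = 40 °C
print(f"critical residual moisture ≈ {100 * w_crit:.1f} wt %")  # ≈ 3.4 wt %
```

With these constants, a 1.5 wt % increase in moisture from the dry state lowers T g by roughly 9 K, which is consistent with the slope reported above.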
4.2.3 Combining a w and T g
Finally, water activity and T g were optimized simultaneously. The value for the (critical) residual moisture, as calculated in , was combined with PC-SAFT calculations of the activity coefficient for all (binary) formulation compositions. The resulting water activity marks the highest tolerable water activity for the respective excipient composition that still fulfills the T g = 40 °C requirement (blue curve in ). For the given case, all values on this curve lie below the upper water activity boundary of 0.25, meaning that for this particular system/formulation, water activity values above the curve but below 0.25 (gray area in ) would meet the water activity criterion but fail the T g criterion. The lower boundary for water activity remains at 0.025 (as taken from ). The green area in thus marks the applicable formulation window that fulfills both requirements. From an application perspective, it is therefore recommended to use 0.33 wt % ectoine in the sucrose/ectoine mixture, as this gives the highest flexibility in the residual water content, with tolerable values between 0.24 and 2.8 wt % water. As the influence of temperature on the water activity in a formulation is small, it can be expected that predictions made at 40 °C are also valid at other storage temperatures (e.g., 25 and 5 °C), and vice versa. The calculation of the water activity for 5, 25, and 40 °C is shown in the Supporting Information; the values correspond closely.
4.3 Validation of Water Activity Correlation and Design Approach
In order to validate the water activity correlation and design approach described in through 4.2.3, we defined several formulations for stability testing in the water activity range between 0.01 and 0.1. In addition, a formulation with high water activity outside the proposed range was prepared to validate the upper limit for water activity.
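The window check behind this design approach can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the activity coefficient γ w would come from PC-SAFT, so the value used here is purely hypothetical; the water activity is taken as a w = γ w · x w with x w the water mole fraction computed from the residual moisture (molar masses: water 18.015 g/mol, sucrose 342.30 g/mol), and the window bounds 0.025–0.25 are those derived above.

```python
# Check whether a sucrose lyophilizate at a given residual moisture falls
# inside the proposed water-activity window 0.025 <= a_w <= 0.25.
# gamma_w is a PLACEHOLDER value; in the paper it is computed with PC-SAFT.
M_WATER, M_SUCROSE = 18.015, 342.30  # g/mol

def water_activity(residual_moisture_wt, gamma_w):
    """a_w = gamma_w * x_w for a binary water/sucrose lyophilizate."""
    n_w = residual_moisture_wt / M_WATER          # moles of water per gram
    n_s = (1.0 - residual_moisture_wt) / M_SUCROSE  # moles of sucrose per gram
    x_w = n_w / (n_w + n_s)                        # mole fraction of water
    return gamma_w * x_w

def in_window(a_w, lower=0.025, upper=0.25):
    return lower <= a_w <= upper

a_w = water_activity(residual_moisture_wt=0.01, gamma_w=0.3)  # hypothetical gamma_w
print(f"a_w = {a_w:.3f}, inside window: {in_window(a_w)}")
```

Because x w is fixed by the residual moisture, the excipient composition shifts γ w, which is exactly the lever discussed in Section 4.2.1.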
Excipient combinations containing sucrose, arginine, ectoine, and proline were selected because these excipients are widely used in the pharmaceutical industry. The excipient compositions are listed in . All formulations were lyophilized as described in . Results for residual moisture, water activity, glass-transition temperature after drying, and monomer retention after 9 months of storage at 40 °C are given in . The residual moisture ranged between 0.31 and 0.91 wt % and the water activities between 0.01 and 0.09. The glass-transition temperatures of the formulations were all above 40 °C. Formulation #8 was prepared as a negative example with a high residual moisture of 4.07 wt % and a resulting water activity of 0.309, a value beyond the proposed range for a stable formulation after lyophilization. Its glass-transition temperature was 26.5 °C, approximately 13.5 K below the maximum storage temperature of 40 °C. Confirming the water activity calculations based on literature data performed in , a water activity of 0.01 leads to formulations with a monomer retention of >97% after 90 days of storage at 40 °C. For water activities above 0.01, the monomer retention increased, and full retention (>99%) was found for formulations #1, #4, #5, #7, and #8. Considering all data points shown in , the trend that high water activity leads to better stabilization than low water activity is confirmed. All formulations showed slight cake shrinkage after drying and detachment of the cake from the vial wall. Additionally, a vial-neighboring effect resulting from intervial cooling during lyophilization was observed; see the upper pictures in . Samples from formulation #4 showed macroscopic collapse after annealing and subsequent drying, resulting from viscous flow during annealing; this collapse was not observed for formulation #3, which did not include the annealing step.
The observed collapse in formulation #4 may explain why its glass-transition temperature was much lower than that of formulation #3, despite the identical excipient composition. The collapse may have caused an irregular moisture distribution, so that a spot of high residual moisture was analyzed by DSC; otherwise, the glass-transition temperature would probably be higher and closer to the T g of formulation #3. No collapse or browning was observed in formulations #1–#7 over the storage period, at either 25 or 40 °C. Formulation #8 (the "negative example") collapsed after one month of storage at 40 °C. After 9 months of storage, only a collapsed, highly viscous drop remained at the bottom of the vial; see the right pictures in . Simultaneously, the lyophilizate turned yellow, which can be attributed to browning. These two phenomena are due to the low glass-transition temperature of 23.5 °C and the water activity of over 0.28 (the limiting water activity for a browning reaction to take place), respectively. Although complete retention of the antibody was possible at a water activity above 0.25, the recommended threshold should not be exceeded, in order to ensure overall mechanically and chemically stable lyophilizates. Thus, high molecular mobility does not necessarily compromise antibody stability, as formulation #8 showed complete monomer retention, suggesting that stabilization by beneficial molecular interactions is the decisive factor in this case. All formulations remained amorphous over the storage time. In addition, no change in the unfolding temperature of the antibody was observed during storage. The detailed results can be found in the appendix.
Stability data was selected from mAb formulations that had different excipient compositions with largely uniform residual moisture. Water activity of the respective formulations was calculated using PC-SAFT, as described in . The results are listed in . Detailed compositions of respective formulations are given in the appendix. Considering monomer retention as a function of water activity delivers a clear correlation (see ) with respect to (long-term) formulation stability. Formulations with very low water activity values show a significant loss in monomer content (if stored at 40 °C for 90 days) of up to 16% (e.g., no. 2 in , a w = 0.000048). In a water activity range between a w = 0.000633 and a w = 0.0127, monomer retention of over 98% could be achieved after samples were stored for 90 days at 40 °C. The highest and almost complete monomer retention (>99.9%) was observed at water activities of 0.23 (formulation #23 in ) and 0.24 (formulation #24 in ). The results clearly suggest that based on the data analyzed, the water activity of a formulation should be higher than 0.025 if a monomer retention of >97% is desired and higher than 0.23 in order to allow for the best (long-term) stabilization of the antibodies (>99.9% monomer retention). It is crucial to avoid water activities below 0.000275, as (based on the formulations investigated) this will lead to significant monomer loss during long-term storage. It has to be mentioned that the values of 0.025 and 0.23 were defined based on the available data set. With other, more comprehensive stability studies, even slightly lower water activities might be tolerable. 4.1.1 Identifying the Optimal Water Activity Range With the lower boundary (minimal value) for water activity already available through previous investigations, further investigations were performed to identify a useful upper water activity boundary. The water activity in a lyophilizate after drying can typically reach values of up to 0.4. 
This is because constraining meaningful dry products to an accepted residual moisture in the formulation of maximally ca. 3 wt % excludes higher water activities. As illustrated in , from a general point of view, two main chemical degradation routes are typically taken into account for lyophilized formulations: oxidation kinetics can be reduced by increasing the water activity, with a minimum oxidation expected at a water activity of 0.4. However, above a critical water activity of 0.28, the Browning reaction rate constantly increases. As an optimal compromise between the two degradation reactions, the water activity should be close to the critical water activity of 0.28 in order to minimize the oxidation rate, prevent the browning reaction from taking place, and at the same time be as high as possible. This theoretical consideration overlaps well with the results from the experiments reported above in , defining the water activity window in lyophilized formulations to be in the region of 0.025 to 0.25 . Identifying the Optimal Water Activity Range With the lower boundary (minimal value) for water activity already available through previous investigations, further investigations were performed to identify a useful upper water activity boundary. The water activity in a lyophilizate after drying can typically reach values of up to 0.4. This is because constraining meaningful dry products to an accepted residual moisture in the formulation of maximally ca. 3 wt % excludes higher water activities. As illustrated in , from a general point of view, two main chemical degradation routes are typically taken into account for lyophilized formulations: oxidation kinetics can be reduced by increasing the water activity, with a minimum oxidation expected at a water activity of 0.4. However, above a critical water activity of 0.28, the Browning reaction rate constantly increases. 
As an optimal compromise between the two degradation reactions, the water activity should be close to the critical water activity of 0.28 in order to minimize the oxidation rate, prevent the browning reaction from taking place, and at the same time be as high as possible. This theoretical consideration overlaps well with the results from the experiments reported above in , defining the water activity window in lyophilized formulations to be in the region of 0.025 to 0.25 . Initial Excipient Choice Based on a w and T g 4.2.1 Identification of Promising Excipient Combinations Based on γ w As one of the two determinants/levers for tuning a w to a specific value within the desired water activity range, investigations were performed to identify excipient combinations that offer a broad tunable range in γ w values depending on their composition. PC-SAFT calculations were performed, as described in . The results illustrated in show a promising, useful system (sucrose/ectoine, high range in γ w values, mostly above the value of the single component), a non-optimal system (sucrose/arginine, low range in γ w values, mostly below the value of the single components), and a “no effect” system with practically no differences in γ w values over the entire ratio range of mixtures (sucrose/proline). Systems such as sucrose/ectoine shown in are preferable, as the high range in γ w values simultaneously allows for a high range of residual moistures to be considered while still meeting the water activity range criterion. 4.2.2 Tuning T g through the Residual Mositure The second determinant/lever investigated for tuning a w to a specific value is the residual moisture x w , which is directly connected to T g of the formulation. Care has to be taken to ensure that T g of the formulation is far higher than the storage temperature (in order to ensure a glassy state/adherence to vitrification theory and avoid cake collapse). 
It is common knowledge that the glass-transition temperature of the lyophilizate decreases with an increase in residual moisture. That is because water has a glass-transition temperature of −137 °C. T g of the formulations was calculated using the Gordon–Taylor equation, as described in the Introduction. Gordon–Taylor constants were fitted to the glass-transition temperatures T g ’ of sucrose/water and ectoine/water, as stated in the literature. Fitting resulted in Gordon–Taylor constants of 0.311 for sucrose/water and 0.4 for ectoine/water. The effect of residual moisture on T g is depicted in , where an increase of 1.5 w % residual moisture lowers T g for about 9 K for all sucrose/ectoine excipient compositions. The Gorden–Teller equation thus allows us to calculate the (critical) residual moisture for all binary compositions of sucrose/ectoine at which the formulations/lyophilizates T g is still above the critical value of 40 °C. This temperature was selected in order to avoid collapse of the lyophilizate during storage at T storage = 25 °C and to ensure “zero mobility” at 2–8 °C as the regular storage temperature. For pure sucrose, the threshold T g of 40 °C is reached at a residual moisture content of 3.4 w %. For pure ectoine, the threshold is already reached at 1.6 wt % residual moisture. 4.2.3 Combining a w and T g Finally, water activity and T g were combined/simultaneously optimized. The value for the (critical) residual moisture, as calculated in was used and combined with PC-SAFT calculations for the activity coefficient for all (binary) formulation compositions. The resulting water activity marks the highest tolerable water activity for the respective excipient compositions, which still fulfills the T g = 40 °C requirement (blue curve in ). 
For the given case, all values on this curve lie below the upper water activity boundary of 0.25, meaning that for this particular system/formulation, water activity values above the curve but below 0.25 (gray area in ) would meet the water activity criterion but fail the T g criterion. The lower boundary for water activity remains at 0.025 (as taken from ). The green area in thus marks the applicable formulation window that fulfills both requirements. From an application perspective, it is therefore recommended to use the 0.33 wt % ectoine in the sucrose/ectoine mixture, giving the highest flexibility in the residual water content with tolerable values between 0.24 and 2.8 wt % water. As the influence of temperature on the water activity in a formulation is small, it can be expected that predictions made at 40 °C in this regard are valid also at other storage temperatures (e.g., 25 °C and 5 °C) or vice versa. The calculation of the water activity for the temperatures 5, 25, and 40 °C is shown in the Supporting Information, and all show corresponding values.
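The combined criterion can also be expressed as a small design check. A minimal sketch under stated assumptions: the water activity is approximated as a_w = γ_w · x_w, the activity coefficient γ_w is taken as a given input (in this work it comes from PC-SAFT), and x_w stands loosely for the residual water content; the example values in the usage note are placeholders, not results of this study.

```python
# Combined design check: the admissible residual-water range must satisfy
# both 0.025 <= a_w <= 0.25 (with a_w approximated as gamma_w * x_w) and
# the glass-transition criterion (x_w below the critical residual moisture
# from the Gordon-Taylor calculation). gamma_w is a placeholder input here;
# in the work it is computed with PC-SAFT.

A_W_MIN, A_W_MAX = 0.025, 0.25

def moisture_window(gamma_w, x_w_critical):
    """Return the admissible residual-water range (x_lo, x_hi), or None if
    the water-activity and T_g criteria cannot be met simultaneously."""
    x_lo = A_W_MIN / gamma_w           # below this, a_w < 0.025
    x_hi = min(A_W_MAX / gamma_w,      # above this, a_w > 0.25 ...
               x_w_critical)           # ... or T_g falls below the threshold
    return (x_lo, x_hi) if x_lo <= x_hi else None
```

For example, a placeholder γ_w of 9 with a critical moisture of 2.8 wt % yields a window of roughly 0.3 to 2.8 wt % water; the actual window depends on the PC-SAFT activity coefficients of the specific excipient composition.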
Validation of Water Activity Correlation and Design Approach

In order to validate the water activity correlation and design approach described in through 4.2.3, we defined several formulations for stability testing in the water activity range between 0.01 and 0.1. In addition, a formulation with high water activity outside the proposed range was prepared to validate the upper limit for water activity.
Excipient combinations containing sucrose, arginine, ectoine, and proline were selected because these excipients are widely used in the pharmaceutical industry. The excipient compositions are listed in . All formulations were dried and lyophilized, as described in . Results on residual moisture, water activity, glass-transition temperature after drying, and monomer retention after 9 months of storage at 40 °C are given in . The residual moisture ranged between 0.31 and 0.91 wt % and the water activities between 0.01 and 0.09. The glass-transition temperatures of the formulations were all above 40 °C. Formulation #8 was prepared as a negative example, with a high residual moisture of 4.07 wt % and a resulting water activity of 0.309, a value beyond the proposed range for a stable formulation after lyophilization. Its glass-transition temperature was 26.5 °C, approximately 13.5 K below the maximum storage temperature of 40 °C. Confirming the water activity calculations based on literature data performed in , a water activity of 0.01 leads to formulations with a monomer retention >97% after 90 days of storage at 40 °C. For water activities above 0.01, the monomer retention increased, and full retention (>99%) was found for formulations #1, #4, #5, #7, and #8. Considering all data points shown in , the trend of high water activity leading to better stabilization than low water activity is confirmed. All formulations showed little cake shrinkage after drying and cake detachment from the wall. Additionally, a vial-neighboring effect was observed, resulting from intervial cooling during lyophilization; see the upper pictures in . Samples from formulation #4 showed macroscopic collapse after annealing and subsequent drying, resulting from viscous flow during annealing; this macroscopic collapse was not observed for formulation #3, which did not include the annealing step.
The observed collapse in formulation #4 may explain why its measured glass-transition temperature is much lower than that of formulation #3, although both have the same excipient composition. Collapse may have led to an irregular moisture distribution, so that a spot of high residual moisture was analyzed by DSC; the true glass-transition temperature is probably higher and closer to the T g of formulation #3. No collapse or browning was observed in formulations #1–7 over the storage period, neither at 25 °C nor at 40 °C. Formulation #8 (“negative example”) collapsed after one month of storage at 40 °C. After 9 months of storage, only a collapsed, highly viscous drop remained at the bottom of the vial; see the right pictures in . Simultaneously, the lyophilizate turned yellow, which can be attributed to browning. These two phenomena are due to the low glass-transition temperature of 23.5 °C and the water activity of over 0.28, which is the limiting water activity for a browning reaction to take place, respectively. Although complete retention of the antibody was possible at a water activity above 0.25, the recommended threshold should not be exceeded, to ensure lyophilizates that are both mechanically and chemically stable. Thus, high molecular mobility does not necessarily compromise antibody stability: formulation #8 showed complete monomer retention, suggesting that stabilization by beneficial molecular interactions is the decisive factor in this case. All formulations remained amorphous over the storage time. In addition, no change in the unfolding temperature of the antibody was observed during storage. The detailed results can be found in the appendix.

Conclusions

Within this work, we have demonstrated that the water activity can serve as a reliable indicator of antibody stability. Experimental data showed a clear correlation with monomer retention after accelerated stability testing (9 months, 40 °C).
Traditional approaches typically consider only the excipient composition or the residual moisture for formulation development. This work highlights that the effects of excipient choice and residual moisture can, and should, be assessed jointly, as is the case when using water activity. It further indicates that no universally optimal level of residual moisture exists; the optimal level depends on the specific excipients used and their interactions with water. We developed an innovative design approach that facilitates the identification of promising excipients and excipient mixtures for (lyophilized) formulations based on water activity calculations. By applying this approach, we were able to predict formulation compositions and conditions that exhibit enhanced (long-term) stability with minimal to no experiments. Predicted formulations were validated using accelerated stability studies (9 months, 40 °C), which confirmed the validity and reliability of our design approach and, thus, the applicability of water activity as a design parameter. This work signifies an advancement in formulation development, providing a pathway from traditional trial-and-error methods to a more strategic, modeling-based development process. It also improves the likelihood of identifying stable and effective antibody formulations in the early development stages and shortens the time needed to find them.
COVID-19 and the Case for Medical Management and Primary Care

This pandemic has laid bare horrific cracks and chasms in our fragmented healthcare system. For years, U.S. healthcare has plodded along in a predominately piecemeal, for-profit fashion, yielding a system with pervasive dysfunction characterized by high cost and poor outcomes. Indeed, U.S. healthcare is now a 3.6 trillion dollar industry constituting almost a fifth of our Gross Domestic Product. Those 3.6 trillion dollars are made off the backs of doctors and patients, in 15- and 30-minute appointment slots, with time only to deal with the “most important” 1-2 issues, tabling the rest for the next truncated visit. American adults, 50% of whom admit to skipping medications, and 40% of whom are obese, bear the burden for most of America’s healthcare woes. The rise of lifestyle comorbidities and diseases of despair keeps premiums high, hospitals full, and the pockets of Big PhRMA lined. COVID-19 has held the mirror up to our broken system. It has revealed an over-reliance on risky surgeries, persistent gross health disparities, and profiteering at the expense of prevention, wrapped in the illusion of humanism. From this scourge we have an opportunity to materialize the exceptionalism our nation deserves, by creating a system that supports health and reduces harm for patients and populations.

The Over-Reliance on Elective Surgeries

Hospitals and health systems rely on elective surgeries to stay profitable. The suspension of elective surgery has resulted in furloughs and layoffs across the industry. The American Hospital Association estimates a $200 billion loss, nationwide, from March through June of 2020. In April alone, 1.4 million healthcare jobs disappeared. Record losses mounted as demand for emergency room, hospital, and intensive care beds skyrocketed.
It is confounding that you could fill a hospital with sick COVID-19 patients and lose money. It is confounding until you grasp that medical management and “real value” quality metrics are woefully undervalued. On average, only 29% of hospital admissions are surgical, but surgeries compose 48% of hospital revenue. The overvaluing of procedures has two significant downstream effects. First, because procedures pay more, doctors and systems are more apt to do them, regardless of whether people need them. An American College of Cardiology funded study estimates that 12% of all cardiac stents were flat out unnecessary, and 30% had unclear need. Similarly, studies have estimated that over 17% of back surgeries are unwarranted. Indeed, leaders in orthopedics from Stanford to Dartmouth have cited the role of financial incentives in driving unnecessary surgeries. By hook or by crook, revenue-producing procedures get done despite long-term costs in real dollars and, worse, risk to patient safety. Second is the undervaluing of outpatient medical management. To date, heart and lung disease remain among the most common causes of both hospitalization and death in America. Decades of research indicate that lifestyle changes (diet, exercise, smoking cessation) and medical management (pills and inhalers) prevent hospitalizations and surgeries for these diseases, while prolonging life. But the work to manage these conditions is under-valued and under-paid. So chronic conditions persist and thrive, while we debate length-of-stay and re-admission rates.

The Fix: Medical-Surgical Near-Parity

If we value the management of disease, and the prevention of risky surgery, the reimbursement for medical management must be brought closer in line with the reimbursement for surgical management. While there is value in replacing a hip, there should also be value in preventing the need to have that hip replaced in the first place.
The overhead of running an operating room must be considered. But the reimbursement gap between medical and surgical management drives risk and cost through an over-reliance on unnecessary surgeries, at the expense of medical management and prevention. If chronic disease is the scourge of the 21st century, then we should prioritize its prevention and treatment.

Dramatic Healthcare Disparities

The coronavirus has brought a near century-old problem into stark relief: many cannot afford or access healthcare. This burden has fallen disproportionately on minority communities. In Louisiana, blacks make up only 32% of the population but comprise 70% of COVID-19 deaths. In Michigan, blacks account for 14% of the population, but 41% of fatalities. In San Francisco, latinos make up 35% of the population but 80% of cases, while in Virginia, latinos compose 49% of cases while making up only 10% of the population. Why are communities of color dying at such higher rates? The answer is multifactorial, but there are at least two clear reasons. First, we know that people with certain pre-existing conditions like hypertension, diabetes, and obesity are more vulnerable to infections like COVID-19. Unfortunately, these chronic conditions are more prevalent in communities of color. For example, diabetes affects 12.6% of blacks, 11.8% of hispanics, but only 7.1% of whites. Likewise, hypertension affects 43.5% of blacks, 33% of hispanics, but only 27.5% of whites. Second, communities of color disproportionately lack access to healthy living and healthcare. If you do not have access to healthy food, safe places to exercise, and a job with health insurance, hypertension and diabetes become statistical inevitabilities. While redressing the Social Determinants of Health remains paramount, these comorbidities can be mitigated by increasing access to primary care.

Telehealth has been shown to improve access and outcomes, from sub-specialty support to mental health. Until this pandemic, these advances were consigned to rural communities by regulation. The advent of COVID-19 reiterated the need to expand telehealth to all communities, including the urban, suburban, and especially those of color. Accordingly, the Centers for Medicare and Medicaid Services changed their regulations, expanding access, granting reimbursement parity for tele-visits, and allowing physicians to treat across state lines. This regulatory pivot was fundamental in telehealth’s success, portending improved access for mental health and chronic disease management. However, telehealth is not a panacea. Physicians in San Francisco noted a digital divide—a cohort of patients who lacked the infrastructure or technological know-how to “log on.” Any continuation of telehealth will rely on payers acknowledging parity between office and tele-visits. And there are some things, such as a physical exam or point-of-care testing, that require a physical office. Our fractured, fee-for-service system was ill-equipped to make such a fast and vital pivot, resulting in physician office closures, exacerbating wait times and further limiting options.

The Fix: Primary Care

Studies have consistently shown that resource investment into primary care improves health and saves money. Despite these overwhelming benefits, primary care is not valued in the U.S. While the developed world spends 14% of healthcare dollars on primary care, we spend less than half as much (5.8%-7%). You can guess the results.
Twenty-eight percent of Americans suffer from multiple chronic conditions, versus only 17.5% in other developed countries. Forty percent of Americans are obese, compared to 21% in other developed countries. And the rates of hospitalizations for preventable conditions such as hypertension and diabetes are 33% higher in the US, compared to other developed countries. So why do we not have better primary care? Because we do not value it. When doctors could spend the same time training, but get paid 50% more doing another specialty, they do not line up for primary care. When we do not allocate money for the psychologists, nutritionists, and social workers critical to its success, health systems do not invest in primary care. When we pay for injections, instead of paying doctors to spend time talking with patients, prevention and management become afterthoughts. Indeed, the few physicians trying to treat patients holistically are being forced to see more patients, with fewer resources, consequently suffering an epidemic of depression, anxiety, and burnout. The reallocation of funds may be brutal for some, but the math is simple. We can either pay for prevention and management, or we can pay for expensive procedures to Band-Aid the complications of preventable end-stage disease. We can invest in population health, improving access and equity, or we can pay for the sick, communities of color, rural communities, and the poor to suffer and die at disproportionate, yet preventable rates. Even those who may be content with the status quo must realize that higher premiums and tax dollars are going to preventable hospital visits instead of improving schools, roads, and parks. Like other developed nations, we must double investment in primary care. For recruitment and retention, we must pay primary care doctors more. We must pay them to spend time with their patients, to talk about healthy living, prevention, and disease management.
We must pay them more to work in poor and underserved communities, not less. We must pay for the psychologists, nutritionists, and case workers who aid in the treatment of lifestyle comorbidities, diseases of despair, and Social Determinants of Health. We can debate cost, but we must not forget that the product itself is defective. And the real fix starts with primary care. It will save lives, and money.

Conclusion

America herself has a chronic condition—the inability to invest now in what will save us later. The collapse of masks, gowns, and gloves was the hallmark of our hubris. Rather than invest in public health, supply chain resiliency, and disaster preparedness, our margin-obsessed system folded within days after the rumor of pandemic. Politically popular debates on cost only capture half the problem. “Medicare for All,” or private payer competition, will only ensure that everyone has access to the same pile of problems: a fundamentally defective product. Healthcare is a 3.6 trillion dollar economy that makes up a fifth of our Gross Domestic Product and employs between 1 and 2 out of every 10 people. We are kidding ourselves if we think such a leviathan will go gently into that good night. The reallocation of funds will raise the ire of those who benefit from such an inefficient and profligate system.
However, reorienting our system towards outpatient medical management, primary care, and prevention will put us on the path to improve health, equity, and control costs. COVID-19 continues to spotlight our broken system, as it rampages through states with higher concentrations of comorbidities, resulting in dwindling ventilators and intensive care beds. These failures portray, embarrassing and absurd, huge pockets of preventable suffering, death, and run-away cost. If we want to survive without furloughs and firings, to save our poor, our old, and our vulnerable, the answer is obvious. Invest in primary care. Pay for doctors to spend time with patients and to do the cognitive work to keep them healthy. Pay for the therapists, nutritionists, and case workers who bridge the gaps in our broken system. Incentivize prevention and medical management, rather than expensive, preventable, and often unnecessary surgeries. Anything short of that is all hat, and no cattle.
Secondary Endpoint Utilization and Publication Rate among Phase III Oncology Trials

Secondary endpoints (SEP) are trial outcome measures that address important complementary questions to the primary endpoint (PEP); these SEPs may be used to assess treatment efficacy, patient symptoms, correlative translational analyses, and more. In oncology trials, SEPs—particularly translational correlatives—often provide rich, valuable information critical to the interpretation of the trial and the PEP, and may lead to the development of new trials and research directions. Whereas there has been much focus on the selection, validity, and transparency of PEPs in oncology trials, relatively less attention has been given to SEPs. The nature and number of SEPs have an impact on the research burden placed on clinical research infrastructure and especially on patients, who are often asked to donate their time and specimens to advance medical knowledge. Despite the direct impact of SEPs on patients and the overall trial interpretation, the scope and reporting of SEPs across oncology are poorly understood. Selective nonreporting and underpublication of PEPs have been shown to be particularly problematic in oncology trials. Previous studies have shown high variability in thoroughness and compliance with mandatory reporting requirements through trial registries. Transparency in reporting of endpoints is further complicated by the fact that study protocols and their amendments are often unpublished, inaccessible, incomplete, or redacted. Thus, we sought to investigate trends in the frequency, characteristics, and reporting of SEPs in late-phase oncology trials.

We screened ClinicalTrials.gov from inception through February 2020 for phase III cancer-specific interventional randomized controlled trials, as previously described.
Trials were included if the study (i) had published an article with its PEP results through 2020, (ii) had an available protocol, and (iii) contained at least one SEP . We found published articles via both ClinicalTrials.gov and PubMed searches using National Clinical Trial numbers and, if necessary, key words related to the study. Institutional review board approval was waived because of the public availability of data. This study complied with STROBE guidelines . For each included trial, we manually collected SEPs from ClinicalTrials.gov , all available protocol versions, and published articles. The availability and completeness of protocols were also manually validated. SEPs were defined narrowly and only used if labeled specifically as SEPs, outcome measures, or variables, depending on the trial’s preferred language. By contrast, secondary objectives and tertiary or exploratory endpoints were not independently considered SEPs. Moreover, SEPs that were removed in later protocol amendments were not included for the purposes of this study. We reviewed all available published articles to track data for each SEP, with trial publications queried between June and October 2023. We only recorded a SEP as published once it had reached maturity. If a SEP was discussed but no data were listed or available, it was not considered as having been published. We also considered data reported under the “Results” section for a given trial on ClinicalTrials.gov . SEPs were classified into categories. Disease-related outcomes (DRO) encompassed all tumor- and survival-related outcomes. Patient-reported outcomes (PRO) were derived from patients’ answers to questionnaires that typically assessed aspects of their quality of life. Toxicity endpoints covered provider-evaluated adverse events. Translational correlatives included all biomarker, imaging, and biological sample analyses. Pharmacokinetic endpoints evaluated drug metabolism and kinetics. 
Economic endpoints measured medical resource usage and financial toxicities. We defined SEPs as having been published when their data were found in a peer-reviewed manuscript, inclusive of data published in supplementary materials with or without interpretation. SEPs with data that were not published but uploaded in full on ClinicalTrials.gov were defined as reported but not published. To account for variability in publication rate for SEPs collected from different sources, we ran a sensitivity analysis restricting the evaluated SEPs to only those that were (i) listed on both ClinicalTrials.gov and the latest version of the protocol and (ii) from trials with multiple protocol versions. These SEPs had the highest fidelity and were the most consistently acknowledged endpoints associated with each trial. Continuous variables were summarized by median and IQR and categorical variables by frequency. Mann–Whitney U-tests were used to detect differences in the numbers of SEPs by trial sponsorship; if trials were sponsored by both industry and cooperative groups, they were grouped in both categories. Trial-level characteristics and the rate of SEP publication were first evaluated using ordinary least-squares regression. Subsequently, the SEP publication rate for each trial was dichotomized into optimal publication rate (>75%) and suboptimal publication rate (≤75%), which represented 49% and 51% of trials in the dataset, respectively. We then employed binary logistic regression to explore associations and calculate ORs. To account for the potential influence of confounding variables, we adjusted these associations using multivariable binary logistic regression. Confounding variables were identified by mapping causal relationships on a directed acyclic graph using DAGitty (Supplementary Fig. S1; ref. ). All tests were two-sided, confidence intervals (CI) were reported at 95%, and α was set a priori at 0.05.
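The dichotomization and OR estimation described above can be illustrated with a minimal sketch. The study used multivariable binary logistic regression; for a single binary exposure, however, an unadjusted OR with a Wald 95% CI can be computed directly from a 2×2 table. The counts below are hypothetical, chosen purely for illustration, and are not study data:

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.

    a/b: exposed trials with/without the outcome (suboptimal publication),
    c/d: unexposed trials with/without the outcome.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 30 of 50 "many-SEP" trials vs. 15 of 50 "few-SEP"
# trials with a suboptimal (<=75%) publication rate.
or_, lo, hi = odds_ratio_wald(30, 20, 15, 35)
```

For a continuous exposure such as the number of SEPs per trial (OR 1.15 per additional SEP in the study), the same logic is carried by the logistic regression coefficient: the OR is the exponentiated slope.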
Statistical analyses were performed using SPSS v24 (IBM) and SAS v9.4. Plots were created using Prism v10 (GraphPad). Data availability Research data are stored in an institutional repository and will be shared upon reasonable request to the corresponding author. A total of 280 trials enrolling 244,576 patients with publication dates ranging from 2010 to 2023 met the inclusion criteria for this study . Whereas all included trials had an available trial protocol, 55% of studies (153/280) provided more than one protocol or a summary of amendments . There was a median follow-up time of 8 years per trial after the primary publication to the end of data capture (IQR: 6–10 years). Across the 280 trials examined, there were a total of 2,562 SEPs. A median of eight SEPs was found per trial (IQR: 5–12). Notably, seven trials had 25 or more SEPs, with the highest number observed being 48 SEPs in a single trial. Most of the SEPs (66%; 1,700/2,562) were documented in both ClinicalTrials.gov and the respective trial protocol. The remaining SEPs were recorded exclusively in one of three places: only on ClinicalTrials.gov , only in the protocol, or only in a publication, as detailed in Supplementary Table S1. Only 22% of trials (62/280) listed all their SEPs consistently across both ClinicalTrials.gov and the last available version of the protocol. The absolute number of SEPs per trial increased over time ( β = 0.36; P < 0.0001; ). The number of SEPs was associated with trial sponsorship, with an increased median number of SEPs per trial for industry-sponsored studies versus nonindustry-sponsored studies (median 9 vs. 5 SEPs per study; P < 0.0001). Overall, 69% of SEPs (1,770/2,562) were ever published. Half of the SEPs (50%, 1,268/2,562) were published in the main text of the primary article.
The remaining published SEPs were distributed among the supplement of the primary article (7%, 183/2,562), the main text of a secondary publication (12%, 300/2,562), and the supplement of a secondary publication (1%, 19/2,562; ). Secondary articles with SEP results were published a median of 2.5 years after the primary publication (IQR: 1.5–4; ). Half of all trials (144/280) published more than 75% of their SEPs. The publication rate significantly varied by SEP category [χ²(5, N = 2,562) = 245.86; P < 0.001]. DROs and toxicity endpoints were published at the highest rates of 75% (1,137/1,514) and 78% (309/396), respectively, whereas pharmacokinetics and economic measures were the lowest at 24% (37/155) and 13% (2/16; ), respectively. Sixty-three percent of all PROs were published; of the 169 trials with at least one PRO endpoint, 52% (88/169) published all their PROs, and 28% (48/169) published none of them (Supplementary Table S2). Translational correlatives were 44% (39/88) published overall, with 36% (15/42) of those based on blood testing published, compared with 52% (16/31) of those requiring tissue samples or bone marrow aspirations (Supplementary Table S3). Trials with greater numbers of SEPs were more likely to underpublish their SEPs, defined by a publication rate of 75% or less (OR 1.15; 95% CI, 1.09–1.22; P < 0.0001). This association persisted after adjustment for confounders (adjusted OR 1.16; 95% CI, 1.09–1.22; P < 0.0001; Supplementary Table S4A). Publication also seemed to be related to DRO SEPs; trials with a greater percentage of DRO endpoints were less likely to underpublish, even after adjustment for number of SEPs per trial (adjusted OR 0.30; 95% CI, 0.11–0.85; P = 0.02; Supplementary Table S4B). Other trial-level factors did not seem to be strongly associated with underpublication (Supplementary Table S5A–S5H). Lastly, disease setting (upfront vs.
relapsed/refractory; OR 0.90; 95% CI, 0.56–1.45; P = 0.7) and primary publication year (OR 0.97; 95% CI, 0.88–1.07; P = 0.5) were also not associated with the SEP publication rate. Owing to the heterogeneity in SEPs listed between the registry and the protocol, the analysis was repeated looking at the highest fidelity SEPs: only those that were (i) listed on both ClinicalTrials.gov and the protocol and (ii) from trials with multiple protocol versions. These 1,068 SEPs were the most consistently acknowledged in association with their trials, even after protocol amendments. In this sensitivity analysis, 74% of SEPs (794/1,068) were published, and 59% of trials (60/147) published greater than 75% of their SEPs (Supplementary Table S6). The number of SEPs remained associated with underpublication (adjusted OR 1.27; 95% CI, 1.14–1.41; P < 0.0001; Supplementary Table S7). Whereas 31% (792/2,562) of total SEPs were not published, 19% of SEPs (491/2,562) were unpublished but had data reported on ClinicalTrials.gov . SEP data were reported on ClinicalTrials.gov a median of 1 year after the primary publication (IQR: 0–3 years) and a median of 1.5 years before secondary publications containing SEP results . For 8% of SEPs (203/2,562), result data were never reported on ClinicalTrials.gov or published, and no justification was provided as to their unavailability . Of all economic and translational correlative SEPs, 69% (11/16) and 26% (23/88), respectively, were missing, never having been published or reported. In this large-scale analysis of SEPs among phase III oncology clinical trials, the number of SEPs was shown to have considerably increased over time, and the majority of SEPs were shown to be published. However, SEP underpublication is particularly prominent among PROs and translational endpoints. 
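As a cross-check, the chi-square statistic quoted in the results can be approximately reproduced from the per-category counts reported in this section. The PRO counts (246 published of 393) are not stated explicitly and are inferred here from the reported totals and percentages; the Pearson statistic for the resulting 2×6 table (5 degrees of freedom) comes out very close to the reported 245.86:

```python
def pearson_chi2(published, totals):
    """Pearson chi-square statistic for a 2 x K table of published vs. unpublished SEPs."""
    unpublished = [t - p for t, p in zip(totals, published)]
    rate = sum(published) / sum(totals)  # overall publication rate (~69%)
    chi2 = 0.0
    for k, total in enumerate(totals):
        for observed, row_rate in ((published[k], rate), (unpublished[k], 1 - rate)):
            expected = total * row_rate
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Categories: DRO, toxicity, PRO (inferred), translational, pharmacokinetic, economic
chi2 = pearson_chi2([1137, 309, 246, 39, 37, 2], [1514, 396, 393, 88, 155, 16])
```

In practice, `scipy.stats.chi2_contingency` on the same 2×6 table returns the statistic, degrees of freedom, and p-value directly (no Yates correction is applied to tables larger than 2×2).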
SEP underpublication may present ethical challenges considering patient burden associated with obtaining biospecimens for correlative analyses, as well as the time commitment required for SEP compliance (i.e., PROs; refs. , ). The number of SEPs seems related to underpublication, suggesting that the increasing numbers of SEPs per trial are prohibitive for reliable publication reporting. To appropriately respect the burden placed on patients, as well as limit multiplicity concerns, trialists should thoughtfully weigh the feasibility and practicality of SEPs in conjunction with clinical relevance toward key research questions. Although other studies have focused on more limited sets of endpoints, to the best of our knowledge, this is the first and only comprehensive analysis of all SEPs across a large cohort of phase III oncology trials. Defining SEPs for each trial was challenging, as our thorough manual review found that SEPs were inconsistently recorded across available protocols and the ClinicalTrials.gov registry, in line with previous analyses . Thus, the manually validated diversity of sources used both to initially extract SEPs and track publication data contributed to a more in-depth understanding of the trial landscape, detecting inconsistencies in the handling of SEP data that would not have been possible had only one source been used. Notably, although many unpublished SEPs did ultimately have data reported on ClinicalTrials.gov , ClinicalTrials.gov results were presented without explanation or analysis—and at times, without inferential statistical testing. Therefore, it presents difficulties in interpretation for patients and physicians who are not content matter experts . Our analysis also raised questions about the underpublication of particular data types, especially PROs and correlatives. 
PROs are crucial to providing the patient’s perspective on tolerability and toxicity and add valuable information beyond physician assessment of adverse events; however, the completion of lengthy questionnaires can be time-consuming and distressing to patients . Survey fatigue from lengthy questionnaires has also been shown to increase respondent attrition rates and compromise response quality . Translational correlatives often require the collection of biological specimens from patients and may be associated with painful and invasive procedures obtained outside the context of routine clinical care. Given the burden such SEPs may place on patients, trials should particularly endeavor to publish these data in a timely manner to aid in the interpretation of the PEP and other SEPs . There are several key limitations to this study. To capture the full range of each trial’s SEPs, we examined only trials with published online protocols, but low protocol availability rates among oncology trials limited our overall sample size . Incomplete protocols and lack of multiple protocol versions may also limit the transparency of the final confirmed SEPs per trial, despite our comprehensive examination of publicly available data across the trial protocols, publications, and ClinicalTrials.gov . To account for the standard study procedure of editing SEPs after initial trial design, we chose not to examine SEPs that were removed in later protocol amendments. However, these may have already been evaluated on patients, thus contributing further to the effect size of underpublication. Additionally, data that were published through nonpeer-reviewed mechanisms such as lay press or company websites were not examined under the scope of our study, although such data would potentially be available to patients. 
Further follow-up time could lead to higher rates of SEP publication as data matures and secondary articles are released, although a minimum of 8 years after the study start year was provided for each trial. In summary, this comprehensive examination of the oncology clinical trial landscape highlights the imperative of SEP publication and transparency across all endpoint types. At the time of trial design, SEPs should be thoughtfully restricted to those that are biologically plausible and supported by other clinical evidence or rationales, while being conscientious of the burden on patients. To truly promote transparency surrounding these endpoints, trials should endeavor to publish complete protocols and amendments, ideally in the form of first and last or summary of changes. Finally, all prespecified endpoints should be published on a reasonable timeline; when that is not possible, the rationale for nonreporting should be provided. Supplemental Figure S1 Structural causal model of the relationship between the number of SEPs, confounding variables, and the percent of SEPs published. Orange represents the exposure of interest (number of SEPs), yellow represents the outcome of interest (percent of SEPs published), and the red arrow indicates the causal path. Green circles indicate confounders, blue circles indicate non-confounding ancestors of the exposure and outcome. Black arrows represent biasing pathways. Supplemental Table S1 Comparison of publication rates by SEP detection method. Supplemental Table S2 Distribution of the percentage of PROs published per trial. Supplemental Table S3 Distribution of the types of correlatives and their respective publication rates. Supplemental Table S4 Full multivariable model evaluating the association between significant trial-level factors and the percentage of SEPs published.
Supplemental Table S5 Full multivariable model evaluating the association between nonsignificant trial-level factors and the percentage of SEPs published. Supplemental Table S6 Comparison of publication and reporting between the overall dataset and the sensitivity analysis restricting SEPs to only those from both the protocol and ClinicalTrials.gov, from trials with multiple protocols available. Supplemental Table S7 Full multivariable model evaluating the association between the number of SEPs and the percentage of SEPs published among sensitivity analysis SEPs from both the protocol and ClinicalTrials.gov, from trials with multiple protocols available.
Cortical and subcortical p‐tau density in CTE: Comparison to Alzheimer’s disease | 24938b91-587c-43b0-8fa9-9ae476a08315 | 11716140 | Forensic Medicine[mh] | |
Direct modulation index: A measure of phase amplitude coupling for neurophysiology data | d1a62302-3f94-446b-89a8-8aa26fb1baa8 | 9980882 | Physiology[mh] | INTRODUCTION The investigation of the communication between structures across different spatial and temporal scales has been a major area of interest in the field of cognitive and motor neuroscience (Siems et al., ; Siems & Siegel, ). In particular, a growing body of research regards phase‐amplitude coupling (PAC) as a phenomenon reflective of a multi‐frequency communication mode across and within neural structures (Canolty & Knight, ; Jensen & Colgin, ). The level of PAC between neural structures is quantified by the degree to which the phase of a low‐frequency neural oscillation reflects the shape of the amplitude of a high‐frequency oscillation (Bragin et al., ; Lakatos et al., ). Several studies have reported a close relationship between PAC of high‐gamma amplitude with alpha phase and behavioral performance on cognitive, motor, and sensory tasks (e.g., Schroeder & Lakatos, ; Voytek et al., ; Yanagisawa et al., ). Given the large interest in PAC, many methods have been developed to provide an accurate quantification of this cross‐frequency neural communication (Tort et al., ). A non‐exhaustive overview of established methods is presented in Table . The methods include the modulation index (MI; Tort et al., ), mean vector length (MVL; Canolty et al., ) and phase locking value (PLV; Mormann et al., ). Although these methods were successful in revealing relevant brain‐behaviour relationships (e.g., Canolty & Knight, ; Penny et al., ), they share two main limitations. First, none of these methods provides a bounded output measure, which prohibits the interpretation of absolute PAC values across different studies (Hülsemann et al., ; Tort et al., ). 
Second, some methods may erroneously detect high PAC values at harmonic multiples of the frequency of the enveloped amplitude signal (e.g., Giehl et al., ; Kramer et al., ), as suggested by the findings of the present investigation. This work introduces the direct modulation index (dMI) as a novel measure of PAC, which aims to circumvent the limitations of the aforementioned methods. The dMI is a bounded variation of the MI as introduced by Tort et al. . A dMI value of 1 indicates strong PAC, while a value of 0 indicates no PAC. Furthermore, dMI is highly sensitive to the target frequency only, and therefore avoids the pitfall of assuming significant PAC changes at harmonic frequencies to be actual findings. In the next section, we begin with a description of the proposed measure, followed by an illustration of its performance in comparison to a selection of established connectivity methods on simulated data. METHODS 2.1 Direct modulation index With the modulation index as its first step, the dMI shares the calculation of a phase-amplitude histogram (Tort et al., ). Following preprocessing, which includes bandpass filtering, a phase-amplitude histogram is constructed across the entire duration of the input signal by extracting the phase of the low-frequency signal and the amplitude of the high-frequency signal. In the current implementation, the Hilbert transform was used to extract the phase of the low-frequency signal, while the amplitude of the high-frequency signal was estimated using the rectified signal rather than the Hilbert transform in order to speed up processing. The original study by Tort et al. constructed the phase-amplitude histogram using 18 phase bins, each 20° wide. Conversely, we opted to use 360 overlapping phase bins of 20° width each, shifted in steps of 1°, in order to obtain a better model fit in the following step.
Nevertheless, comparisons of the dMI to the MI, both calculated with the original 18 bins of 20° each, have been added in Supplementary Materials 1. A composite signal is then constructed from the phase of the low-frequency signal and the amplitude of the high-frequency signal, and the mean amplitude is calculated across phase bins. In the next step, the phase-amplitude histogram is normalized to produce a more robust fit in the following step. To ensure that it is more resistant towards outliers, the 25th and 75th percentiles–as opposed to the minimum and maximum values–are chosen as reference points for the normalization. The values are selected based on the interquartile range, and have been shown to be robust against outliers when the proportion of outliers is less than 25% (Jones, ). Next, the values are scaled and shifted to result in a normalized histogram which is loosely bound to the interval [−1, 1]. Afterwards, instead of scoring the PAC values from entropy, a sinusoid is fitted to the normalized histogram. We used a non-linear least-squares algorithm, implemented in the LMFIT package in Python (Newville et al., ). The frequency of the sine is set to 1 cycle (per 360°), while the phase and amplitude are preset to 0 and 1, respectively. During the fit, the phase is allowed to vary between −180 and +180°, and the amplitude is allowed to vary between 0.95 and 1.05 in order to obtain the best fit. We selected a sinusoidal function because the phase-amplitude histogram of two signals with an ideal PAC relationship was observed to default to a sinusoid shape. Fitting a sinusoid through the phase-amplitude histogram renders the measure highly sensitive to the targeted frequency only, whereas the entropy metric is also sensitive to the harmonic frequencies. Finally, an error value is calculated by taking the squared difference between the height of each individual phase bin and the amplitude of the sinusoidal fit at the corresponding bin.
The errors are averaged across phase bins, capped at 1, and then subtracted from 1 to arrive at the dMI. This final step sets the lower and upper bounds of the dMI to 0 and 1, respectively. It simplifies the interpretation of the PAC estimate, as values approaching zero indicate low coupling strength, while values approaching 1 indicate strong coupling. 2.2 Validation data dMI was evaluated in comparison to the following PAC methods: MI (Tort et al., , ), MVL (Canolty et al., ), and PLV (Mormann et al., ). The reader may refer to the corresponding articles for a description of the evaluated measures. dMI and the aforementioned PAC methods were evaluated using simulated data. As opposed to using experimental data–where it is unclear whether any detected PAC at harmonic frequencies is reflective of a true coupling–simulated data enabled us to absolutely determine any signal properties, including the confirmable absence of PAC at the higher harmonic frequencies. We generated a high frequency signal of 200 Hz with an amplitude that was modulated by a 10 Hz oscillation (Figure ), and several low frequency signals of 2, 5, 10, 15, 20, 25, and 30 Hz (Figure ). Each signal had a duration of 30 s and a sampling rate of 1000 Hz. A high PAC value was anticipated for the 10 Hz low frequency signal, but not for the other low frequency signals (Figure ). PAC was modeled to be consistently present throughout the signal without fluctuations in strength. To test the performance of each PAC method at different signal-to-noise ratios, Gaussian noise, with amplitudes that are 0%, 25%, 50%, 100%, or 150% of the amplitude of the signal, was introduced. To additionally investigate the combined effects of noise level and signal duration on each PAC method, the analysis was repeated with a signal length of 300 s. Investigations of the effect of noise with additional signal durations of 0.5 s and 1 s have been included in Supplementary Materials 2.
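The pipeline described in Section 2.1 — overlapping phase bins, quartile-based normalization, a bounded one-cycle sinusoidal fit, and error scoring — can be sketched in plain NumPy. This is not the authors' FiNN implementation: the analytic signal is computed with a hand-rolled FFT-based Hilbert transform, and a closed-form linear least-squares projection (with the fitted amplitude then clipped to [0.95, 1.05]) stands in for the LMFIT-based non-linear fit:

```python
import numpy as np

def _phase_deg(x):
    """Instantaneous phase (degrees) via an FFT-based analytic signal."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.degrees(np.angle(np.fft.ifft(X * h)))

def dmi(low_sig, high_sig, bin_width=20.0):
    """Direct modulation index sketch (signals assumed already band-passed)."""
    phase = _phase_deg(low_sig)
    amp = np.abs(high_sig)                       # rectified high-frequency amplitude
    centers = np.arange(-180.0, 180.0)           # 360 overlapping bins, 1 degree apart
    hist = np.empty(centers.size)
    for i, c in enumerate(centers):
        d = (phase - c + 180.0) % 360.0 - 180.0  # circular distance to bin center
        hist[i] = amp[np.abs(d) <= bin_width / 2].mean()
    # normalize via the 25th/75th percentiles (robust to outliers)
    q1, q3 = np.percentile(hist, [25, 75])
    norm = (hist - 0.5 * (q1 + q3)) / (0.5 * (q3 - q1) + 1e-12)
    # fit a one-cycle sinusoid with free phase; amplitude confined to [0.95, 1.05]
    rad = np.radians(centers)
    a = 2.0 * np.mean(norm * np.sin(rad))
    b = 2.0 * np.mean(norm * np.cos(rad))
    amp_fit = np.hypot(a, b)
    scale = np.clip(amp_fit, 0.95, 1.05) / (amp_fit + 1e-12)
    fit = scale * (a * np.sin(rad) + b * np.cos(rad))
    err = np.mean((norm - fit) ** 2)             # mean squared error per phase bin
    return 1.0 - min(err, 1.0)                   # bound the index to [0, 1]
```

Applied to signals of the kind described here (a 200 Hz carrier whose amplitude is modulated at 10 Hz), this sketch yields a dMI close to the reported 0.94 for the 10 Hz phase signal and a much lower value for an uncoupled control frequency.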
Finally, the performance of each method with varying signal durations was tested by capping the signals at durations of 500, 600, 700, 800, 900, and 1000 ms. In this evaluation, the amplitude of the Gaussian noise was fixed at 33% of the signal amplitude. RESULTS Figure shows the PAC estimates by dMI, MI, MVL, and PLV at various levels of SNR and signal durations. A summary of the performance of the measures can be found in Table . Figure shows the PAC estimates obtained using dMI, MI, MVL, and PLV. A consistent, single peak at 10 Hz can be observed for dMI, with a PAC value of 0.94. Furthermore, an elevated dMI value of 0.064 can be observed at 2 Hz.
PAC estimates from MVL showed a prominent peak at 10 Hz, with a value of 0.24. Furthermore, elevated MVL values are also observed at 5 and 25 Hz, with values of 0.001 and 0.006, respectively. PLV‐based estimates showed the largest peak at 10 Hz with a PAC value of 0.54. However, at frequencies of 20 and 30 Hz, prominent peaks of 0.13 and 0.06, respectively, also appeared. Finally, MI‐based estimates showed the highest PAC values of 9.70 at 10 Hz, as well as elevated values of 7.30 and 7.96 at the lower frequencies of 2 Hz and 5 Hz, respectively. Figure shows the effect of varying levels of Gaussian noise on the PAC estimates of the investigated measures for signal lengths of 30 s and 300 s. A general reduction in PAC values can be observed for increasing noise levels. PAC estimates based on MI and PLV decreased most (61% and 51%, respectively) in the presence of weak noise (25% of the signal amplitude). By comparison, the MVL‐based estimates decreased by 25%, and the dMI‐based estimates decreased by 0.29% only. Also, across different Gaussian noise levels, dMI drop rates were consistently lower than those of the other evaluated measures. Furthermore, increasing the signal length to 300 s increased the robustness against noise for the dMI‐based PAC estimates, but not for the other measures (Figure , bottom row). Finally, the erroneously elevated PAC values of MI and PLV were also present in the analysis across different levels of Gaussian noise. Figure shows the effect of signal length on the PAC estimates of the measures investigated. The results were similar to those reported in the previous investigation of the effect of noise. Decreasing the signal length did not have an effect on dMI‐based PAC estimates. When using the MI to estimate PAC, small fluctuations were observed as an effect of signal length. MVL and PLV appeared to be most affected by signal length, especially for the low frequency signals of 2 Hz and 5 Hz. 
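For readers unfamiliar with the MVL comparison measure discussed above, it has a compact definition (Canolty et al., 2006): the magnitude of the time-averaged composite vector a(t)·e^{iφ(t)}, where φ(t) is the low-frequency phase and a(t) the high-frequency amplitude. A minimal sketch, using a hand-rolled FFT-based analytic signal rather than any particular toolbox implementation:

```python
import numpy as np

def mvl(low_sig, high_sig):
    """Mean vector length: |mean(a(t) * exp(i * phi(t)))|."""
    n = low_sig.size
    X = np.fft.fft(low_sig)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    phi = np.angle(np.fft.ifft(X * h))   # phase of the low-frequency signal (radians)
    amp = np.abs(high_sig)               # rectified high-frequency amplitude
    return np.abs(np.mean(amp * np.exp(1j * phi)))
```

Because the mean vector length scales with the raw amplitude of the high-frequency signal, its absolute values are not comparable across recordings — one of the unboundedness issues the dMI is designed to avoid.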
The erroneously elevated PAC values of MI and PLV were also present in the analysis across different signal lengths. DISCUSSION The current work presents the dMI as a novel measure of PAC. Our dMI has been designed to be easily interpretable on a stationary interval between 0 and +1, and specific to the frequency of interest only. The performance of dMI, PLV, MVL, and MI was investigated using artificial data under increasing levels of noise and with decreasing amounts of data. The results indicate that dMI is more robust towards varying levels of Gaussian noise and short signal durations than the other PAC methods investigated in the scope of our evaluations (Table ). The dMI measure, as well as other reliable measures to estimate neurophysiological interactions, for example in the same frequency band (Scherer et al., ), is freely available as a Python toolbox at https://github.com/neurophysiological-analysis/FiNN (Scherer et al., ). First, one characteristic specific to dMI is that its PAC estimates are bound to a stationary interval between 0 and 1, as opposed to the other investigated methods where the estimates can theoretically take on a wide range of values. Bounding the output to a specific, stationary interval allows for the interpretation of absolute changes in PAC. This, in turn, facilitates the calculation of meaningful effect sizes in statistical investigations. A combination of these two pieces of information is essential for any meaningful interpretation and for the discussion of any results (Lakens, ; Stankovski et al., ). Second, dMI was found to be highly resilient towards high levels of Gaussian noise and performed well with decreasing amounts of data within the extent of the current investigations. By contrast, PAC estimates from MI, PLV, and MVL strongly deteriorated as the levels of noise increased.
While decreasing signal duration had no effect on dMI, for PLV it led to a high number of erroneously elevated PAC estimates at lower frequencies. Finally, in our investigations, we observed that MI tends to systematically overestimate PAC values for frequencies below the target frequency. This may be related to the scoring mechanism of MI, which calculates the entropy of the phase-amplitude histogram; this entropy remained high even when low levels of Gaussian noise were added. MVL-based PAC estimates also performed poorly at frequencies below the target frequency, in particular for shorter signal lengths, as the method returned a high number of erroneously elevated PAC estimates within that range. The dMI, although only minimally affected by noise or signal length, slightly overestimated PAC at 2 Hz. This may be related to the fit of the sinusoid to the phase-amplitude histogram, which approaches the histogram as the frequency decreases. On the other hand, both PLV and MVL tended to systematically overestimate PAC values at the harmonics of the target frequency. These observations underline the sensitivity of PAC estimates and the need for further investigation towards a reliable solution. However, it is important to bear in mind that these observations are potentially biased, as the results are derived from artificially created data. The current implementation of dMI assumes a sinusoidal distribution of amplitudes across the individual phase bins. This assumption is likely to hold, provided a sufficiently large sample size with independent measurements is available (Nixon et al., ). Our implementation of dMI also enables the user to easily visualize the phase-amplitude histogram to understand the shape of the PAC fit. In the event that the observed histogram is not sinusoidal, the user may conveniently define another function for the line fitting.
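The sinusoidal-fit idea behind dMI can be sketched as follows. The code below is an illustrative approximation, not the FiNN implementation: it bins the fast-rhythm envelope by slow-rhythm phase, fits a one-cycle sinusoid to the resulting phase-amplitude histogram, and scores coupling as the modulation depth normalised by the offset, which keeps the estimate between 0 and 1. The bin count, the scoring function, and the normalisation are all assumptions here and, as noted above, the fitted shape could be swapped for another function.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import hilbert

def dmi_sketch(low_sig, high_sig, n_bins=18):
    """Illustrative dMI-style PAC score from a sinusoidal fit of the
    phase-amplitude histogram, bounded between 0 and 1."""
    phase = np.angle(hilbert(low_sig))
    amp = np.abs(hilbert(high_sig))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    # Mean envelope amplitude per phase bin: the phase-amplitude histogram.
    hist = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

    # One-cycle sinusoid: offset + depth * cos(phase - preferred_phase).
    def sinusoid(ph, offset, depth, pref):
        return offset + depth * np.cos(ph - pref)

    (offset, depth, _), _ = curve_fit(sinusoid, centers, hist,
                                      p0=[hist.mean(), hist.std(), 0.0])
    # The envelope is non-negative, so |depth| <= offset and the
    # normalised modulation depth is (approximately) bounded by 1.
    return min(abs(depth) / offset, 1.0)

# Same synthetic setup as before: a 10 Hz phase modulating a 60 Hz amplitude.
fs, dur = 1000, 30
t = np.arange(0, dur, 1 / fs)
slow = np.sin(2 * np.pi * 10 * t)
coupled = (1 + 0.8 * slow) * np.sin(2 * np.pi * 60 * t)
score = dmi_sketch(slow, coupled)
```

For the 80%-modulated signal the fitted depth-to-offset ratio recovers a score close to the modulation depth, and inspecting `hist` against the fitted sinusoid is the visualization step described above.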
CONCLUSIONS

Here, we presented dMI as a new measure to estimate PAC of neurophysiological data on the basis of a sinusoidal fit of the phase-amplitude histogram between two signals. The dMI has been designed to be resistant against varying levels of noise, to perform well with short signal durations, and to be easily interpretable due to the absolute boundary values of 0 and +1. Furthermore, through configurations of the parameters and/or changing of the sinusoidal scoring function, dMI is easily adaptable to the question at hand. We used simulated data to show that dMI provides a more reliable estimate of PAC than a number of other established measures. This novel measure may therefore provide a useful tool for the investigation of brain dynamics with implications for basic and clinical science. Future studies are required to test the performance of dMI in real-life signal processing scenarios in comparison to the other PAC metrics.

AUTHOR CONTRIBUTION

Maximilian Scherer: Conceptualization, Methodology, Software, Writing–original draft, Writing–review & editing. Tianlu Wang: Writing–original draft, Writing–review & editing. Robert Guggenberger: Conceptualization, Methodology, Writing–review & editing. Luka Milosevic: Conceptualization, Methodology, Writing–review & editing. Alireza Gharabaghi: Conceptualization, Writing–review & editing, Funding acquisition.

The authors declare no conflict of interest.

Data S1: Supporting Information
The role of community engagement in promoting research participants’ understanding of pharmacogenomic research results: Perspectives of stakeholders involved in HIV/AIDS research and treatment | 5e1399cb-b4fa-49e3-9485-d18afafa067b | 10986979 | Pharmacology[mh] | There is an increase in pharmacogenomic research in sub-Saharan Africa aimed at improving HIV treatment . In Uganda, there are about 1.5 million people living with HIV (PLHIV) and 28,000 dying of AIDS-related illnesses annually . Pharmacogenomic research involves studying how an individual’s genes precisely influence their response to a given medication including drug efficacy, adverse events, and dosing requirements . Pharmacogenomic research can present a vast amount of information , some of which may be pleiotropic, where a single gene can contain information about several phenotypes, resulting into potential incidental findings , in addition to primary results. Primary results are defined as findings responding to a well-defined research question, while incidental findings are any results discovered unintentionally and are not related to the primary research question, but may have significant implications for the health or well-being of the participant and family members . Researchers and clinicians may use results from pharmacogenomic research analyses to determine the most appropriate drug and dosing requirements for an individual. Similarly, these results are equally important to research participants and communities involved in pharmacogenomic and other genomic research studies in determining their best treatment options. Studies have reported a high demand from research participants for their individual results from pharmacogenomic and genomic research . A recent study that explored the factors influencing the preferences and reasons for the desire to receive individual results from pharmacogenomic research among people living with HIV showed that 98% wanted to receive all their primary results . 
However, several studies across the globe have reported participants' inadequate understanding of genomic and pharmacogenomic research information, including results . This has been attributed to the complexity of genomic terms and the absence of their direct translations in many African local languages where genomic studies are being conducted . A study conducted among PLHIV reported that only 23% of the participants enrolled in clinical trials that included a pharmacogenomic component had adequate understanding of the information disclosed to them during the consenting process . Inadequate understanding of research information is a barrier to participants' informed decision-making for participation and to determining the kind of results they would like to receive. To promote adequate understanding of genomic and pharmacogenomic research information and findings, community engagement (CE) has been proposed by several researchers . They suggested that communities should be actively engaged in research activities right from proposal development to result dissemination. CE is considered one of the platforms where research participants can learn more about how individuals' genes interact with drugs, an opportunity to clarify the research goal and objectives, and to understand the appropriate CE approaches and strategies specific to a given research project . CE also facilitates co-learning between researchers and research communities, promoting transparency and free sharing, hence building long-term relationships based on mutual respect and trust . The Uganda National Council for Science and Technology (UNCST) guidelines for Community Engagement in Research recommend using one or more approaches when engaging research communities, for example, formative consultations, existing structures and groups, community leaders, community events, mass media, and community advisory boards (CABs) .
In addition, the H3Africa guidelines of community engagement for genomic research and biobanking in Africa recommend adopting responsive and flexible CE strategies that are shaped by the participants and their communities' experiences . These guidelines emphasize that communities are not homogeneous, so researchers ought to employ creative approaches that minimize the potential risks to participants and research communities when communicating pharmacogenomic and genomic information. It is important to note that individual pharmacogenomic and genomic research results may present findings with ethical, legal and social implications that may not only affect research participants but also extend to families and research communities . Therefore, inappropriate approaches to communicating pharmacogenomic research results at community and individual levels might raise concerns such as loss of privacy and breach of confidentiality, which may stigmatize research participants and their family members. Yet, there is limited literature on suitable CE approaches for communicating simple and understandable pharmacogenomic information, including results, at community level. This study therefore explored stakeholders' perspectives on the role of CE in promoting understanding of pharmacogenomic research results among PLHIV. We present findings on CE approaches that can enhance participants' understanding of pharmacogenomic information and findings. We hope that findings from this study might inform institutional and national guidelines for returning genomic and genetic research results to people living with HIV, their families and communities.

Study design and setting

This cross-sectional study used a qualitative exploratory approach . The study was conducted at Makerere University College of Health Sciences (MakCHS) and five affiliated research institutions located on Mulago Hill.
The College of Health Sciences is one of the nine constituent colleges of Makerere University, with vast experience in HIV/AIDS research, including pharmacogenomics. We also considered three of the five accredited research ethics committees (RECs), housed at MakCHS, that had experience reviewing pharmacogenomic research, specifically for HIV treatment. In addition, we included three Ugandan RECs with prior experience in reviewing pharmacogenomic research in HIV.

Research team

The research team comprised a social scientist, bioethicists, a medical anthropologist, and medical scientists with experience in conducting and analyzing qualitative data.

Study participants

In this study, we selected three categories of stakeholders in pharmacogenomic research for HIV treatment: researchers, members of research ethics committees, and community representatives. We purposively selected 15 researchers involved in pharmacogenomic research for HIV treatment for the period 2018–2021 and based at MakCHS or affiliate research institutes as eligible for enrolment in this study. Of these, 12 consented to participate, while three declined, citing inadequate time to participate in our study. We selected 12 REC members with experience in reviewing proposals for pharmacogenomic research in HIV treatment. Six were REC chairpersons and the rest were members of the RECs representing the research communities. Three of the REC chairpersons preferred to nominate a member of their REC who was more knowledgeable in the field of study. We also conducted five deliberative focus group discussions (dFGDs), each comprising six community representatives from the five HIV research institutions affiliated with MakCHS. This study was conducted between September 2021 and February 2022. All participants were purposively selected and were above 18 years of age. Before conducting the study interviews, the research team was trained on the protocol to ensure that they understood the study well.
Study procedure

Considering the restrictions that prevailed during the COVID-19 pandemic, some data collection activities were conducted virtually. Researchers and REC members who had conducted or reviewed HIV pharmacogenomic research, respectively, were contacted by email, which contained a brief description of the study and a request to schedule an appointment for the interview. Expression of interest to participate was recorded by a positive response to the email, followed by sharing the consent form. The consent forms were either signed electronically or by a written email notification of acceptance to participate in this study. Appointments were scheduled and conducted virtually via Zoom at the participants' convenience. Interviews were audio-recorded and lasted between 30 and 40 minutes. Five dFGDs were conducted with community representatives. The community representatives were contacted through their respective leaders. The dFGDs were conducted in two languages, English and Luganda, the commonly spoken language in Central Uganda. The dFGDs were conducted through face-to-face interactions after the national lockdown was lifted during the COVID-19 pandemic, with appropriate mitigation measures in place. Prior to the dFGDs, each community representative received a consent form document in the language of their choice. One of the research team members obtained written consent from each community representative after discussions on the various components of the study. At the beginning of each dFGD, participants were provided with an overview of how antiretroviral drugs interact with human genes. This was followed by a vignette describing a hypothetical scenario of the possible results that could be generated from pharmacogenomic research, including primary results and incidental findings. This prior information helped participants to gain an understanding of pharmacogenomic research and the different kinds of results that could emerge from it.
Clarifications were offered prior to and during the discussions. A team of four researchers (SN, AT, CW and ESM) conducted all interviews to ensure consistency. A note-taker was present throughout the discussions to back up the electronic recording. The dFGDs were audio-recorded and lasted about 60–90 minutes. Both the dFGD and interview guides were developed from the literature and subsequently revised to capture new emerging topics. The topics of discussion included questions related to the potential benefits of community engagement, how to effectively communicate pharmacogenomic research information (including primary results and incidental findings) at community level, how to achieve community consensus when determining the kind of results to be returned, perceived challenges of engaging communities in the feedback process of results, and the perceived roles of each stakeholder in promoting research participants' understanding of pharmacogenomic research results. The guides were first piloted on three genomic researchers, two REC members and three HIV peer support members who were excluded from the study; their feedback was used to improve the interview guides. The research team held debriefing meetings at the end of each interview to identify new perspectives that were not initially captured by the tool. Data were collected until no new information or insights were being revealed.

Data analysis

Data were analysed continuously throughout the study using a thematic approach . All audio recordings were transcribed verbatim. Transcripts of the community representatives were translated from Luganda to English. All transcripts were verified for accuracy by reading word by word while listening to the audio recordings for quality checks and spelling errors. This step helped the authors to familiarize themselves with, mark and memo the data.
Transcripts were initially analyzed separately based on the three participant categories (researchers, REC members and community representatives) in order to gain a deeper understanding of each category's perceptions on the subject matter. Three authors (SN, AT, and ESM) selected three transcripts from each participant category for open coding. These transcripts were read line by line to generate the first set of codes. Syntheses of codes from the independent readings were iteratively discussed among the three authors, and codes with similar ideas were merged. Differences in coding among the independent coders were resolved by consensus. A hierarchical codebook and coding framework was developed for each participant category by the three authors to guide the analysis of the data. The hierarchy of codes was then sorted into categories based on how themes were related and linked. We then deductively generated themes using our pre-existing analytic framework, which we developed from the literature on the role of community engagement in genomic research, as represented in the interview guides. We also inductively considered new themes that emerged from the transcripts. All the transcripts were then imported into NVivo version 12 and coded by three authors (SN, AT, and ESM). Three authors (DK, IM and CW) examined the themes for pattern consistency until consensus was achieved on the final themes. All the authors compared the emergent themes with the existing literature to confirm that the final themes accurately represented the stakeholders' perspectives on the role of community engagement in promoting participants' understanding of pharmacogenomic research results. We also returned some transcripts to the stakeholders to verify whether the data collected was a true reflection of their statements on the subject matter. This ensured that the data are transferable to similar settings and enhanced the credibility of the study findings .
The key findings were summarized and the overlapping themes across the three categories of participants were merged as presented in Table 2. The final codebook included the merged themes and codes from all three categories of participants. It was continuously refined to establish the themes presented in the results section. Regarding research reflexivity, the research team was aware that we needed to remain neutral throughout the interviews and focus group discussions. We acknowledge our potential biases based on the prior knowledge about the research institutions where we recruited the research stakeholders and the existing relationships between the interviewees and the research team through prioritizing listening from the interviewees' perspective.

Ethical consideration

This study obtained ethics approval from the Makerere University School of Biomedical Sciences Higher Degrees and Research Ethics Committee (SBS-855) and Uganda National Council for Science and Technology (SS 735ES). Written informed consent was obtained from all stakeholders, and they were assured of confidentiality.
The demographic characteristics of stakeholders are presented in . A total of 54 participants took part in this study. The majority had more than five years' experience in pharmacogenomic research and HIV care and treatment.

Summary of the themes and key findings

A summary of the key themes that emerged from the data collected is described in .

I Benefits of engaging communities prior to returning individual pharmacogenomic research results
II Obtaining community consensus on the kinds of pharmacogenomic results to be returned
III Opinions on how pharmacogenomic research information and results should be communicated at community and individual levels
IV Perceived roles of community stakeholders in promoting participants' understanding and utilization of pharmacogenomic research results
V Perceived challenges of engaging communities when returning individual results to research participants

Benefits of engaging communities prior to returning individual pharmacogenomic research results to participants

The majority of the stakeholders (47) lauded CE for promoting understanding of individual pharmacogenomic results, but emphasized the need to engage community members right from proposal development. Stakeholders considered CE an opportunity to create platforms where participants and community members can freely interact with researchers throughout the research period.
In addition, engaging communities was recognized for enabling researchers to learn about communities' cultural values and local explanations of disease experiences. In return, participants and communities also learn about the individual variability of drug response, thus promoting adequate understanding of pharmacogenomic research and the implications of research results. The continuous interactions between researchers, participants and communities were also said to foster lasting relationships based on mutual respect and trust, and eliminate misconceptions about genomics research.

[…] As long as you keep giving information to people about unclear things… they can easily trust you. This will help reduce on many community misunderstandings and misconceptions about genomic research. (KII_Male_Researcher_12)

You see, you need to engage the communities early enough. When you do so, many will be given an opportunity to understand the goal of pharmacogenomic research and its implications even if they may not necessarily participate, but they will have learnt about these genetics hard concepts. So here, you see, these results do not necessarily help the participants alone but the whole community who have very many questions about this topic… (FGD 2_Male_Community representative 3)

One community representative mentioned that community engagement can promote ownership of findings of pharmacogenomic research and foster community solidarity, where community members may contribute to the financial and social support for the less privileged individuals.

When they receive the research information and results as a family or as community, they will be more willing to learn more about genes and drug interactions… and then take necessary action. I have even seen sometimes, people in communities come together to collect finances for their friends to go abroad for expensive medications which they [patients] cannot afford on their own.
(FGD 1_Female_Community representative 4)

Majority of the stakeholders across the different participant categories (38) acknowledged community engagement as an important strategy to overcome societal stigmatization of individuals or families who may not respond well to the available HIV treatment regimen provided by the Government of Uganda.

Most of the time people in communities believe that individuals or families that are different from the majority of the population are cursed or are being punished by God. But when you discuss these [genetics] issues together with them [community members] as a group, many will appreciate the importance of having a genetically diverse population. (KII_Male_Researcher_6)

Obtaining community consensus on the kinds of pharmacogenomic results to be returned

We asked stakeholders about the various ways of obtaining community consensus on the kinds of results they would wish to learn from pharmacogenomic research. All stakeholders across the different participant categories (54) agreed that a community is a complex group of people with varying values, beliefs and interests. Therefore, stakeholders emphasized the need to first define the specific groups of people within the community that have potential to benefit from the genomic research study and, later, identify the key gatekeepers prior to accessing the potential beneficiaries.

I think in this case, it would be important to first define a community based on the disease and subject at hand, if we want them [communities] to appreciate the benefits of research. Research targets a particular population and not the whole community, so we should be able to engage with people who have been infected and/or affected by HIV/AIDS and see how these results can be returned to them. (KII_Female_Researcher_5)
Almost all of the stakeholders (50) agreed that before selecting the kinds of results from pharmacogenomic research, community members should be provided with adequate information about the nature of the results and their implications. One REC member emphasized that researchers and community representatives should provide generalized but appropriate and understandable information about the role of genes in the treatment of HIV/AIDS to minimize raising ethical issues in communities, such as stigmatization. We need to strike a balance regarding the information researchers communicate at community level and individual level . People living with HIV still suffer stigmatization , even among themselves . Just telling someone that your body cannot respond to first-line ARVs [Antiretroviral drugs] , you need to move to second line may make one feel stigmatized . So , I feel we should provide general information about pharmacogenomic research at community level and then get into the specifics at individual level . (IDI_Male_REC member_5). Another REC member cited concerns about balancing community and individual interests when selecting the kinds of results to be returned from pharmacogenomic analyses. Since community choices are made based on a majority decision, sometimes the voices of the minority may be ignored, yet individuals’ rights should be respected. He suggested that research institutions and regulators should develop a framework for building community consensus on the kind of results to be discussed at community level while considering the interests of the individuals participating in pharmacogenomic research. … .. As communities are pushing to be involved in the process of returning these results , we need a framework and guidance from regulators and RECs on how to balance participants’ interests at individual level and communities’ interests at community level . 
(KII_Male REC member #12) Several REC members (06) emphasized the need to seek consent from the family head and individual consent from each family member who wishes to learn about their pharmacogenomic research results. They emphasized the need to provide adequate information and time to the family members to understand the results’ implications fully, before soliciting their choices of the kinds of results they wish to receive. These family members should be first prepared by providing them with detailed information about genes and their role in their bodies just like how you have prepared the participant throughout the study . In fact , these people [family members] should first consent to either accept to know this information or not . Some family members may not want to know about these results . No one should decide for them , not even the participant’s decision to share his/her results with family members . They [family members] should also be autonomous because the results will be affecting their lives as individuals… .. (KII_Male_REC member # 6)

Opinions on how pharmacogenomic research information and results should be communicated at community and individual levels

In order to build community trust and promote ownership of these results, stakeholders suggested that a healthcare provider knowledgeable and experienced in genetics should provide the pharmacogenomic information and results to communities. Eight community representatives emphasized the need to provide genomic information in local languages that local communities can understand. The language researchers use to communicate this information matters . Researchers should consider translating this information into local languages and consult community representatives on the appropriate local words to be used when explaining genomic terms . (FGD 5_Male_Community representative 5).
The majority of researchers (10) emphasized the role of genetic counselors in the feedback process of pharmacogenomic research results. One researcher mentioned that genetic counselors should work with the research team to devise appropriate tools for communicating pharmacogenomic information and results, and advise on strategies for engaging communities throughout the research project. Sharing this genetics information with community members can sometimes be tricky . Many of them [community members] are illiterate and do not know much about genes . Therefore , the research team needs to be very creative and probably work with genetic counselors who understand both the scientific information and societal norms . (KII_Female_Researcher_2) Another researcher mentioned that the research team usually trains peer clients to communicate pharmacogenomic research information to fellow participants. She hoped this strategy would enhance participants’ understanding of individual pharmacogenomic research results, since some participants may be more willing to open up to their peers than healthcare workers. I think we can also utilize our peer-clients to explain these results to their fellow participants . From the time we started these PG [pharmacogenomic research] studies , we have trained our expert patients how to explain and break down these terms . They even share with us some of the concerns that some participants tell them… . (KII_Female_Researcher_8) Some researchers (04) suggested using music, dance and drama to convey information about pharmacogenomic research and the implications of its results to participants. Another way we can communicate these results to participants and community members is utilizing the creativity of our drama club here at the Institute . They usually come up with skits and songs to communicate HIV-related information at least once in two weeks .
So we can give them scripts to act out something about the role of genes in breaking down the ARVs [Antiretroviral drugs] . (KII_Male_Researcher_11) One researcher suggested using group education as another way of promoting participants’ understanding of pharmacogenomic research information, including results. I would think about holding group discussions among the potential participants who may be attending the ART [Antiretroviral treatment] clinic on a given day . For example , I could plan a session on one Tuesday , the day when many patients come in , then I give them some information about the study and encourage them to ask questions . This way , some other potential participants can learn from others’ questions but can ask more questions when they choose to join the study (KII_Female_Researcher _2) One community representative emphasized the need to identify a family elder or an influential member of a particular family to communicate these results at the family level. If you want participants and families to own and utilize these results , you may need to identify a key person in that family , someone that is respected and can be listened to by most family members… . If one of their own can describe a given condition , while citing familiar stories , it will be easy for other family members to appreciate the results and own them… (FGD 1_Male_Community representative 5).

Perceived roles of community stakeholders in promoting participants’ understanding and utilization of pharmacogenomic research results

Stakeholders made several suggestions on what they perceived as the roles of the different stakeholders in promoting participants’ understanding and utilization of pharmacogenomic research results. They highlighted the contribution of participants, community representatives, researchers, research institutions and research regulators to enhancing community understanding of pharmacogenomic research results.
Researchers asserted that adequate and relevant information about pharmacogenomic research should be clearly provided to research participants and communities throughout the study. However, one researcher emphasized the need to avoid redundant information that may not have meaning to research participants and the community members. Furthermore, they indicated that researchers ought to be knowledgeable and competent to interpret pharmacogenomic results, fully understand their implications and provide a well-explained action plan following the return of results to participants and their communities. Two researchers mentioned that it is their responsibility to share study findings with the national regulatory agencies and RECs to maximize utilization of research findings. They emphasized working together with the Ministry of Health to educate the public about the different roles of genes in the human body. “What we [researchers] should do is to understand the implications of these results first , before presenting them to the participants , or to their family members . This way , we shall be able to provide answers and appropriate solutions to participants and their family members .” (KII_Male_Researcher_6) “We have the moral obligation to share all these findings with the Regulatory Authorities and ethics committees , whether primary results information or incidental findings . This information can help guide policy makers on what they may think is crucial to consider about pharmacogenomics in HIV treatment or other genomic studies .” (KII_Male_Researcher_12) Community representatives indicated that it is their responsibility to guide the research team when assessing the potential social harm and other risks of returning individual pharmacogenomic research results to participants, their families, and community members.
They mentioned that it was their responsibility to enhance community members’ understanding of research findings by explaining complex genomic terms using relatable and relevant life stories. They also felt that it was their responsibility to create awareness about pharmacogenomic research and the implications of research findings through sensitization and health campaigns. They argued that health campaigns would help to provide accurate information about pharmacogenomic research and minimize misinformation and misperceptions. … . You know genetics is a very sensitive topic and many communities have not appreciated these studies right now . There are many incorrect stories and myths about HIV drugs that come up because people respond differently to the ARVs [Antiretroviral drugs]… And some people take their ARVs [Antiretroviral drugs] along with some traditional herbs , which may also affect how their bodies react to the ARVs [Antiretroviral drugs]… so the community representatives should come out and provide accurate information to nullify the myths about how different human bodies respond to ARVs [Antiretroviral drugs]… . (FGD 5_Female_Community representative 1) REC members highlighted some of their responsibilities in promoting participants’ understanding of pharmacogenomic research results. They stated that it is their role to review research protocols and identify the possible ethical, social, and legal implications of sharing individual pharmacogenomic research results with research participants and their communities. They also pointed out the importance of ensuring that the informed consent documents have accurate information, are simple and written in a language that is easily understandable. They further suggested that researchers should put in place measures to protect participants and families from social harms that may be associated with the results feedback process. 
REC members also emphasized the need for researchers to submit study-specific dissemination plans for review and approval prior to the commencement of study activities. “.. The REC’s primary role is to protect participants and their communities from research related harm… .. for example we need to ensure that the information given to them is simple , clear and easy to understand because the results can be misinterpreted and cause psychological harm to participants …” (KII_Male_REC Participant #10) In addition, RECs should provide standardized protocol templates for genetic and genomic studies to guide researchers on the critical ethical aspects of genomic and genetics studies when developing research protocols. RECs have a role of reviewing and advising investigators how to design informed consent processes and how these results can be safely returned . We can achieve this by standardizing genetics protocol templates for genetics and genomic studies .. (KII_Male_REC Participant #12) Regarding the role of research participants and community members, stakeholders said that research participants and members of research communities should raise questions and concerns about unclear concepts for clarification during their interactions with research teams. They said that research participants have a role of selecting the kind of results they would wish to receive after full comprehension of the implications of their choices. We try our level best to build a good relationship with our participants so that they can freely and openly tell us their concerns and aspects they might not have understood . So we expect them to ask questions and also tell us truthful information during our discussions with them (KII_Female_Researcher_3) I think research participants have the responsibility to clearly inform the researchers whether they want to receive their results or not . They should also specify the kind of results they would like to receive .
They should also let us know whether it is okay to share their results with family members or not (KII_Male_Researcher_10) Stakeholders asserted that research institutions should put in place measures to protect research participants and communities from possible social harm that may arise from the result feedback process. Research institutions should come up with structures and systems that protect participants and communities from possible harm . These institutions should employ full-time genetic counselors to address genetics related questions and concerns to both individual participants and communities , even when the research project has ended . (IDI_Female_REC member_6) Stakeholders also felt that research institutions should encourage continuous feedback from research participants and communities even after the study closure, for example using suggestion boxes at the facilities. Further, stakeholders indicated that institutions should engage and/or collaborate with health facilities offering genetic services for a smooth referral of participants and building new relationships at these facilities. One thing I would request research institutions to do , is to establish collaborations with other health facilities or NGOs that can provide extra support to our participants and their family members after we share these results . You may find that some participants need extra psychological support than just the counseling services we offer here [HIV/AIDS research clinics] (KII_Female_Researcher_5) Regarding the role of national research regulators, for example the Ministry of Health, UNCST and NDA, stakeholders asserted that the regulators should amplify the need to provide individual results of genomic analyses, particularly those that are of clinical significance.
Stakeholders also suggested that the national research regulators should develop contextualized guidelines to facilitate a safe return of individual genomic and genetic research results to participants, family members, and research communities. We currently do not have national guidelines on returning genomic and genetics research results to participants and community members… . Therefore , it is sometimes hard for us REC members to advise researchers on some ethical aspects that may affect participants in any way . So our regulators need to develop these guidelines as soon as possible (IDI_Male_REC member_12)

Perceived challenges of engaging communities when returning individual results to research participants

Two researchers observed that it might be difficult to determine the most suitable CE approach when engaging community members. They advised that sometimes, the research team needs to be flexible to accommodate more than one approach. This may require a lot of time and financial resources to creatively adapt more than one approach when engaging and communicating pharmacogenomic-related information appropriately without losing scientific meaning. [… . ] pharmacogenomic research is already difficult to understand . So the team [research team members] need to be careful when translating this information in plays and songs not to lose the scientific meaning of this research [pharmacogenomic research]… . (KII_Female_Researcher_8) [… . ] Sometimes deciding on what language to use when speaking to a group of people is very hard . In Kampala , people speak more than three common languages so the researchers may even get confused on the most suitable language to choose over others . (FGD 1_Female_Community representative 4) One-third of the community representatives raised concerns about diagnostic misconceptions of genetics and genomics studies. Right now , there is a lot of ‘noise’ about paternity testing in our communities .
Even many labs (laboratories) are making adverts for people to go for paternity testing . Now , when you talk about anything concerning genes , many people’s minds run to testing their children’s paternity… . (FGD 5_Female_Community representative 1) One REC member said that prioritizing certain kinds of results over others could be challenging to communicate at community level compared to individual level. He felt that it is sometimes difficult to give an answer regarding why emphasis is placed on certain kinds of results while leaving out others. [ . … . ] I can see it being difficult to explain why researchers are concentrating on certain kinds of results and missing others [results] . They [participants] might think that other results are not important . And , you see , our people are shy . They will fear to ask questions in a group of people . (KII_Male REC member #12) Many researchers (08) were concerned about the absence at their institutions of genetic counselors who are skilled in communicating genomic/genetic information while respecting community values and beliefs. I worry about if we [researchers] can communicate these results to our participants effectively . We don’t have genetics counselors in my institution… . And I think we may ignore some ethical and social challenges that may affect our people . (KII_Male_Researcher_6)

A summary of the key themes that emerged from the data collected is described in .
I. Benefits of engaging communities prior to returning individual pharmacogenomic research results
II. Obtaining community consensus on the kinds of pharmacogenomic results to be returned
III. Opinions on how pharmacogenomic research information and results should be communicated at community and individual levels
IV. Perceived roles of community stakeholders in promoting participants’ understanding and utilization of pharmacogenomic research results
V. Perceived challenges of engaging communities when returning individual results to research participants
They said that research participants have a role of selecting the kind of results they would wish to receive after full comprehension of the implications of their choices. We try our level best to build a good relationship with our participants so that they can freely and openly tell us their concerns and aspects they might not have understood . So we expect them to ask questions and also tell us truthful information during our discussions with them (KII_Female_Researcher_3) I think research participants have the responsibility to clearly inform the researchers whether they want to receive their results or not . They should also specify the kind of results they would like to receive . They should also let us know whether it is okay to share their results with family members or not (KII_Male_Researcher_10) Stakeholders asserted that research institutions should put in place measures to protect research participants and communities from possible social harm that may arise from the result feedback process. Research institutions should come up with structures and systems that protect participants and communities from possible harm . These institutions should employ full-time genetic counselors to address genetics related questions and concerns to both individual participants and communities , even when the research project has ended . (IDI_Female_REC member_6) Stakeholders also felt that research institutions should encourage continuous feedback from research participants and communities even after the study closure, for example using suggestion boxes at the facilities. Further, stakeholders indicated that institutions should engage and /or collaborate with health facilities offering genetic services for a smooth referral of participants and building new relationships at these facilities. 
One thing I would request research institutions to do , is to establish collaborations with other health facilities or NGOs that can provide extra support to our participants and their family members after we share these results . You may find that some participants need extra psychological support than just the counseling services we offer here [HIV/AIDS research clinics] (KII_Female_Researcher_5) Regarding the role of national research regulators for example the Ministry of Health, UNCST and NDA, stakeholders asserted that the regulators should amplify the need to provide individual results of genomic analyses particularly those that are of clinical significance. Stakeholders also suggested that the national research regulators should develop contextualized guidelines to facilitate a safe return of individual genomic and genetic research results to participants, family members, and research communities. We currently do not have national guidelines on returning genomic and genetics research results to participants and community members… . Therefore , it is sometimes hard for us REC members to advise researchers on some ethical aspects that may affect participants in any way . So our regulators need to develop these guidelines as soon as possible (IDI_Male_REC member_12) Perceived challenges of engaging communities when returning individual results to research participants Two researchers observed that it might be difficult to determine the most suitable CE approach when engaging community members. They advised that sometimes, the research team needs to be flexible to accommodate more than one approach. This may require a lot of time and financial resources to creatively adapt more than one approach when engaging and communicating pharmacogenomic related -information appropriately without losing scientific meaning. [… . ] pharmacogenomic research is already difficult to understand . 
So the team [research team members] need to be careful when translating this information in plays and songs not to lose the scientific meaning of this research [pharmacogenomic research]… . (KII_Female_Researcher_8) [… . ] Sometimes deciding on what language to use when speaking to a group of people is very hard . In Kampala , people speak more than three common languages so the researchers may even get confused on the most suitable language to choose over others . (FGD 1_Female_Community representative 4) One third of the community representatives raised concerns about diagnostic misconceptions of genetics and genomics studies. Right now , there is a lot of ‘noise’ about paternity testing in our communities . Even many labs (laboratories) are making adverts for people to go for paternity testing . Now , when you talk about anything concerning genes , many people’s minds run to testing their children’s paternity… . (FGD 5_Female_Community representative 1) One REC member said that prioritizing certain kinds of results over others could be challenging to communicate at community level compared to individual level. He felt that it is sometimes difficult to give an answer regarding why emphasis is placed on certain kinds of results while leaving out others. [ . … . ] I can see it being difficult to explain why researchers are concentrating on certain kinds of results and missing others [results] . They [participants] might think that other results are not important . And , you see , our people are shy . They will fear to ask questions in a group of people . (KII_Male REC member #12) Many researchers (08) were concerned about the absence of genetic counselors at their institutions who are skilled in communicating genomic/ genetic information while respecting the community values and beliefs. I worry about if we [researchers] can communicate these results to our participants effectively . We don’t have genetics counselors in my institution… . 
And I think we may ignore some ethical and social challenges that may affect our people . (KII_Male_Researcher_6) Majority of the stakeholders (47) lauded CE for promoting understanding of individual pharmacogenomic results, but emphasized the need to engage community members right from proposal development. Stakeholders considered CE an opportunity to create platforms where participants and community members can freely interact with researchers throughout the research period. In addition, engaging communities was recognized for enabling researchers to learn about communities’ cultural values and local explanations of disease experiences. In return, participants and communities also learn about the individual variability of drug response, thus promoting adequate understanding of pharmacogenomic research and the implications of research results. The continuous interactions between researchers, participants and communities were also said to foster lasting relationships based on mutual respect and trust, and eliminate misconceptions about genomics research. [……]As long as you keep giving information to people about unclear things…they can easily trust you . This will help reduce on many community misunderstandings and misconceptions about genomic research . (KII_Male_Researcher_12) You see , you need to engage the communities early enough . When you do so , many will be given an opportunity to understand the goal of pharmacogenomic research and its implications even if they may not necessarily participate , but they will have learnt about these genetics hard concepts . 
So here, you see, these results do not necessarily help the participants alone but the whole community who have very many questions about this topic… (FGD 2_Male_Community representative 3)

One community representative mentioned that community engagement can promote ownership of pharmacogenomic research findings and foster community solidarity, whereby community members may contribute financial and social support for less privileged individuals.

When they receive the research information and results as a family or as community, they will be more willing to learn more about genes and drug interactions… and then take necessary action. I have even seen sometimes, people in communities come together to collect finances for their friends to go abroad for expensive medications which they [patients] cannot afford on their own. (FGD 1_Female_Community representative 4)

Majority of the stakeholders across the different participant categories (38) acknowledged community engagement as an important strategy to overcome societal stigmatization of individuals or families who may not respond well to the available HIV treatment regimen provided by the Government of Uganda.

Most of the time people in communities believe that individuals or families that are different from the majority of the population are cursed or are being punished by God. But when you discuss these [genetics] issues together with them [community members] as a group, many will appreciate the importance of having a genetically diverse population. (KII_Male_Researcher_6)

We asked stakeholders about the various ways of obtaining community consensus on the kinds of results they would wish to learn from pharmacogenomic research. All stakeholders across the different participant categories (54) agreed that a community is a complex group of people with varying values, beliefs and interests. Therefore, stakeholders emphasized the need to first define the specific groups of people within the community that have potential to benefit from the genomic research study, and later identify the key gatekeepers prior to accessing the potential beneficiaries.

I think in this case, it would be important to first define a community based on the disease and subject at hand, if we want them [communities] to appreciate the benefits of research. Research targets a particular population and not the whole community, so we should be able to engage with people who have been infected and/or affected by HIV/AIDS and see how these results can be returned them. (KII_Female_Researcher_5)

Almost all of the stakeholders (50) agreed that before selecting the kinds of results from pharmacogenomic research, community members should be provided with adequate information about the nature of the results and their implications. One REC member emphasized that researchers and community representatives should provide generalized but appropriate and understandable information about the role of genes in the treatment of HIV/AIDS to minimize raising ethical issues in communities, such as stigmatization.

We need to strike a balance regarding the information researchers communicate at community level and individual level. People living with HIV still suffer stigmatization, even among themselves. Just telling someone that your body cannot respond to first-line ARVs [Antiretroviral drugs], you need to move to second line may make one feel stigmatized. So, I feel we should provide general information about pharmacogenomic research at community level and then get into the specifics at individual level. (IDI_Male_REC member_5)

Another REC member cited concerns about balancing community and individual interests when selecting the kinds of results to be returned from pharmacogenomic analyses.
Since community choices are made based on a majority decision, sometimes the voices of the minority may be ignored, yet individuals’ rights should be respected. He suggested that research institutions and regulators should develop a framework for building community consensus on the kind of results to be discussed at community level while considering the interests of the individuals participating in pharmacogenomic research.

….. As communities are pushing to be involved in the process of returning these results, we need a framework and guidance from regulators and RECs on how to balance participants’ interests at individual level and communities’ interests at community level. (KII_Male REC member #12)

Several REC members (06) emphasized the need to seek consent from the family head and individual consent from each family member who wishes to learn about their pharmacogenomic research results. They emphasized the need to provide adequate information and time to the family members to understand the results’ implications fully, before soliciting their choices of the kinds of results they wish to receive.

These family members should be first prepared by providing them with detailed information about genes and their role in their bodies just like how you have prepared the participant throughout the study. In fact, these people [family members] should first consent to either accept to know this information or not. Some family members may not want to know about these results. No one should decide for them, not even the participant’s decision to share his/her results with family members. They [family members] should also be autonomous because the results will be affecting their lives as individuals….. (KII_Male_REC member #6)

In order to build community trust and promote ownership of these results, stakeholders suggested that a healthcare provider knowledgeable and experienced in genetics should provide the pharmacogenomic information and results to communities. Eight community representatives emphasized the need to provide genomic information in local languages that local communities can understand.

The language researchers use to communicate this information matters. Researchers should consider translating this information into local languages and consult community representatives on the appropriate local words to be used when explaining genomic terms. (FGD 5_Male_Community representative 5)

Majority of the researchers (10) emphasized the role of genetic counselors in the feedback process of pharmacogenomic research results. One researcher mentioned that genetic counselors should work with the research team to devise appropriate tools for communicating pharmacogenomic information and results, and guide on the strategies for engaging communities throughout the research project.

Sharing this genetics information with community members can sometimes be tricky. Many of them [community members] are illiterate and do not know much about genes. Therefore, the research team needs to be very creative and probably work with genetic counselors who understand both the scientific information and societal norms. (KII_Female_Researcher_2)

Another researcher mentioned that the research team usually trains peer clients to communicate pharmacogenomic research information to fellow participants. She hoped this strategy would enhance participants’ understanding of individual pharmacogenomic research results, since some participants may be more willing to open up to their peers than healthcare workers.

I think we can also utilize our peer-clients to explain these results to their fellow participants. From the time we started these PG [pharmacogenomic research] studies, we have trained our expert patients how to explain and break down these terms. They even share with us some of the concerns that some participants tell them…
(KII_Female_Researcher_8)

Some researchers (04) suggested using music, dance and drama to convey information about pharmacogenomic research and the implications of its results to participants.

Another way we can communicate these results to participants and community members is utilizing the creativity of our drama club here at the Institute. They usually come up with skits and songs to communicate HIV-related information at least once in two weeks. So we can give them scripts to act out something about the role of genes in breaking down the ARVs [Antiretroviral drugs]. (KII_Male_Researcher_11)

One researcher suggested using group education as another way of promoting participants’ understanding of pharmacogenomic research information including results.

I would think about holding group discussions among the potential participants who may be attending the ART [Antiretroviral treatment] clinic on a given day. For example, I could plan a session on one Tuesday, the day when many patients come in, then I give them some information about the study and encourage them to ask questions. This way, some other potential participants can learn from others’ questions but can ask more questions when they choose to join the study. (KII_Female_Researcher_2)

One community representative emphasized the need to identify a family elder or an influential member of a particular family to communicate these results at the family level.

If you want participants and families to own and utilize these results, you may need to identify a key person in that family, someone that is respected and can be listened to by most family members…. If one of their own can describe a given condition, while citing familiar stories, it will be easy for other family members to appreciate the results and own them… (FGD 1_Male_Community representative 5)
Perceived roles of community stakeholders in promoting participants’ understanding and utilization of pharmacogenomic research results

Stakeholders made several suggestions on what they perceived as the roles of the different stakeholders in promoting participants’ understanding and utilization of pharmacogenomic research results. They highlighted the contribution of participants, community representatives, researchers, research institutions and research regulators to enhancing community understanding of pharmacogenomic research results. Researchers asserted that adequate and relevant information about pharmacogenomic research should be clearly provided to research participants and communities throughout the study. However, one researcher emphasized the need to avoid redundant information that may not have meaning to research participants and the community members. Furthermore, they indicated that researchers ought to be knowledgeable and competent to interpret pharmacogenomic results, fully understand their implications and provide a well-explained action plan following the return of results to participants and their communities. Two researchers mentioned that it is their responsibility to share study findings with the national regulatory agencies and RECs to maximize utilization of research findings. They emphasized working together with the Ministry of Health to educate the public about the different roles of genes in the human body.

“What we [researchers] should do is to understand the implications of these results first, before presenting them to the participants, or to their family members. This way, we shall be able to provide answers and appropriate solutions to participants and their family members.” (KII_Male_Researcher_6)

“We have the moral obligation to share all these findings with the Regulatory Authorities and ethics committees, whether primary results information or incidental findings. This information can help guide policy makers on what they may think is crucial to consider about pharmacogenomics in HIV treatment or other genomic studies.”
(KII_Male_Researcher_12)

Community representatives indicated that it is their responsibility to guide the research team when assessing the potential social harm and other risks of returning individual pharmacogenomic research results to participants, their families, and community members. They mentioned that it was their responsibility to enhance community members’ understanding of research findings by explaining complex genomic terms using relatable and relevant life stories. They also felt that it was their responsibility to create awareness about pharmacogenomic research and the implications of research findings through sensitization and health campaigns. They argued that health campaigns would help to provide accurate information about pharmacogenomic research and minimize misinformation and misperceptions.

…. You know genetics is a very sensitive topic and many communities have not appreciated these studies right now. There are many incorrect stories and myths about HIV drugs that come up because people respond differently to the ARVs [Antiretroviral drugs]… And some people take their ARVs [Antiretroviral drugs] along with some traditional herbs, which may also affect how their bodies react to the ARVs [Antiretroviral drugs]… so the community representatives should come out and provide accurate information to nullify the myths about how different human bodies respond to ARVs [Antiretroviral drugs]…. (FGD 5_Female_Community representative 1)

REC members highlighted some of their responsibilities in promoting participants’ understanding of pharmacogenomic research results. They stated that it is their role to review research protocols and identify the possible ethical, social, and legal implications of sharing individual pharmacogenomic research results with research participants and their communities.
They also pointed out the importance of ensuring that the informed consent documents have accurate information, are simple and written in a language that is easily understandable. They further suggested that researchers should put in place measures to protect participants and families from social harms that may be associated with the results feedback process. REC members also emphasized the need for researchers to submit study-specific dissemination plans for review and approval prior to the commencement of study activities.

“… The REC’s primary role is to protect participants and their communities from research related harm….. for example we need to ensure that the information given to them is simple, clear and easy to understand because the results can be misinterpreted and cause psychological harm to participants… (KII_Male_REC Participant #10)

In addition, RECs should provide standardized protocol templates for genetic and genomic studies to guide researchers on the critical ethical aspects of genomic and genetics studies when developing research protocols.

RECs have a role of reviewing and advising investigators how to design informed consent processes and how these results can be safely returned. We can achieve this by standardizing genetics protocol templates for genetics and genomic studies… (KII_Male_REC Participant #12)

Regarding the role of research participants and community members, stakeholders said that research participants and members of research communities should raise questions and concerns about unclear concepts for clarification during their interactions with research teams. They said that research participants have a role of selecting the kind of results they would wish to receive after full comprehension of the implications of their choices.

We try our level best to build a good relationship with our participants so that they can freely and openly tell us their concerns and aspects they might not have understood. So we expect them to ask questions and also tell us truthful information during our discussions with them. (KII_Female_Researcher_3)

I think research participants have the responsibility to clearly inform the researchers whether they want to receive their results or not. They should also specify the kind of results they would like to receive. They should also let us know whether it is okay to share their results with family members or not. (KII_Male_Researcher_10)

Stakeholders asserted that research institutions should put in place measures to protect research participants and communities from possible social harm that may arise from the result feedback process.

Research institutions should come up with structures and systems that protect participants and communities from possible harm. These institutions should employ full-time genetic counselors to address genetics related questions and concerns to both individual participants and communities, even when the research project has ended. (IDI_Female_REC member_6)

Stakeholders also felt that research institutions should encourage continuous feedback from research participants and communities even after the study closure, for example using suggestion boxes at the facilities. Further, stakeholders indicated that institutions should engage and/or collaborate with health facilities offering genetic services for a smooth referral of participants and building new relationships at these facilities.

One thing I would request research institutions to do, is to establish collaborations with other health facilities or NGOs that can provide extra support to our participants and their family members after we share these results.
You may find that some participants need extra psychological support than just the counseling services we offer here [HIV/AIDS research clinics]. (KII_Female_Researcher_5)

Regarding the role of national research regulators, for example the Ministry of Health, UNCST and NDA, stakeholders asserted that the regulators should amplify the need to provide individual results of genomic analyses, particularly those that are of clinical significance. Stakeholders also suggested that the national research regulators should develop contextualized guidelines to facilitate a safe return of individual genomic and genetic research results to participants, family members, and research communities.

We currently do not have national guidelines on returning genomic and genetics research results to participants and community members…. Therefore, it is sometimes hard for us REC members to advise researchers on some ethical aspects that may affect participants in any way. So our regulators need to develop these guidelines as soon as possible. (IDI_Male_REC member_12)

Perceived challenges of engaging communities when returning individual results to research participants

Two researchers observed that it might be difficult to determine the most suitable CE approach when engaging community members. They advised that sometimes, the research team needs to be flexible to accommodate more than one approach. This may require a lot of time and financial resources to creatively adapt more than one approach when engaging communities and communicating pharmacogenomic-related information appropriately without losing its scientific meaning.

[….] pharmacogenomic research is already difficult to understand. So the team [research team members] need to be careful when translating this information in plays and songs not to lose the scientific meaning of this research [pharmacogenomic research]…. (KII_Female_Researcher_8)

[….] Sometimes deciding on what language to use when speaking to a group of people is very hard.
In Kampala, people speak more than three common languages so the researchers may even get confused on the most suitable language to choose over others. (FGD 1_Female_Community representative 4)

One-third of the community representatives raised concerns about diagnostic misconceptions of genetics and genomics studies.

Right now, there is a lot of ‘noise’ about paternity testing in our communities. Even many labs (laboratories) are making adverts for people to go for paternity testing. Now, when you talk about anything concerning genes, many people’s minds run to testing their children’s paternity…. (FGD 5_Female_Community representative 1)

One REC member said that prioritizing certain kinds of results over others could be challenging to communicate at community level compared to individual level. He felt that it is sometimes difficult to give an answer regarding why emphasis is placed on certain kinds of results while leaving out others.

[….] I can see it being difficult to explain why researchers are concentrating on certain kinds of results and missing others [results]. They [participants] might think that other results are not important. And, you see, our people are shy. They will fear to ask questions in a group of people. (KII_Male REC member #12)

Many researchers (08) were concerned about the absence, at their institutions, of genetic counselors skilled in communicating genomic/genetic information while respecting community values and beliefs.

I worry about if we [researchers] can communicate these results to our participants effectively. We don’t have genetics counselors in my institution…. And I think we may ignore some ethical and social challenges that may affect our people. (KII_Male_Researcher_6)

Discussion

This study explored the role of community engagement in promoting understanding of individual pharmacogenomic research results among PLHIV.
Pharmacogenomic research is paving the way for the future of personalized medicine, where tailor-made treatment strategies are defined for groups of individuals based on their genetic make-up. Results from pharmacogenomic research may be used to guide clinicians’ and researchers’ decisions on the appropriate drug and drug dosage required to produce a desirable effect for an individual or groups of people based on their genetic make-up. Similarly, returning these results to PLHIV could help them understand why some people respond to the same drugs differently from others. Participants’ understanding of their genetic make-up encourages adherence to the prescribed treatment, especially among PLHIV with long treatment periods. However, it is imperative that PLHIV attain full understanding of the ethical, legal and social implications before receiving individual results of pharmacogenomic research, because these results can lead to social and psychological harm to participants and their family members. Studies have reported several factors contributing to participants’ inadequate understanding of pharmacogenomic and genomic information: the complex nature of genomic terms, which are difficult to understand even among literate populations; low literacy levels in many resource-limited settings; and the community misconception that participating in genomic and genetic research, or receiving its results, establishes the paternity of children and other family members. To overcome these challenges, it is important that communities likely to benefit from a genetic or genomic study are actively engaged throughout the study. Respondents agreed that determining a culturally acceptable approach or approaches is an essential step to achieve effective CE. Research teams should select an approach that is creative, flexible and sensitive to participants’ values, beliefs and education levels.
Their views are consistent with a study that explored perspectives on returning individual and aggregate genomic research results to study participants and communities in Kenya. Researchers suggested using music, dance and drama as an approach that could enhance communication of understandable information about pharmacogenomic research and results at community level. A study conducted in South Africa that used the "drama of DNA" approach to engage communities reported that drama can be a relatively effective approach for engaging community members when conveying information about ethical and social challenges related to the return of individual genetic research results. Music and drama have been used to convey information related to HIV prevention and adherence to ARV treatment in Uganda and many parts of sub-Saharan Africa. Therefore, research teams might adapt the music and drama approach to communicate understandable information about the implications of findings from pharmacogenomic research, since communities of PLHIV are familiar with it.

However, caution is necessary during the development of music and drama scripts and the translation of research information from English to local languages, to avoid losing the scientific meaning of how genes interact with medicines. Research teams should therefore work together with drama teams in the development of drama scripts to ensure communication of accurate information in a simple manner that is understandable by community members.

Respect for individuals' privacy and confidentiality should be paramount when communicating information about pharmacogenomic research results through music and drama at community level. Upholding participants' privacy and confidentiality when communicating results from genomic and pharmacogenomic research prevents risks of stigmatizing participants and their families, discrimination, and loss of interest in participating in future genomic and genetic research.
To maintain participants' privacy and confidentiality, stakeholders mentioned that relatively general but understandable information about pharmacogenomic research results should be provided at the community level, while individualized information about the results is given to participants during one-on-one discussions. Researchers also suggested training peer clients to provide additional explanation to participants, in layman's language, on how an individual's genes interact with drugs. Empowering community members as vessels to explain genomics and genetics to their fellow peers might encourage free and open sharing of feelings about these results. Trained peer clients also provide social support to fellow peers to overcome fears and misconceptions about findings from genomic and pharmacogenomic research, hence promoting participants' understanding and ownership of the results.

Group education is another approach that was suggested by researchers to promote participants' understanding. This approach might provide an opportunity for potential participants to ask questions based on the background information given during the group discussions, enabling their fellow peers to learn from the explanations provided by the researchers. It is important to note that participants who might not be comfortable raising certain questions during the group discussion are still able to ask their questions during their individual meetings with the research teams.

Respondents also suggested the need to engage genetic counselors when determining an appropriate communication method for returning these results to communities. Currently, there are few genetic counselors in some sub-Saharan African countries, while others do not have any genetic counselor, yet genomic research is rapidly increasing in sub-Saharan Africa.
Genetic counselors are skilled professionals in providing scientific information about genomics and genetics, and social support, to participants involved in genomic research. Therefore, research institutions should build the capacity of genetic counselors to support the process of returning individual pharmacogenomic research results.

Respondents also suggested that research institutions should hold regular and ongoing discussions about genomic and genetic research. The institutions may develop a consistent schedule for discussions about genomic research as an opportunity for participants to appreciate the relevance of genes in an individual's body and the implications of receiving individual pharmacogenomic research results. However, some research institutions in developing countries might face challenges with limited funding to achieve effective community engagement; such institutions may solicit financial support from government and non-government agencies.

Lastly, key stakeholders involved in pharmacogenomic research, for example researchers, REC members, and community representatives, have a role in ensuring that participants adequately understand the implications of genomic and pharmacogenomic research results before they are returned to them. Researchers should report study findings to the national research regulators and policy-makers to jointly develop appropriate strategies for sensitizing communities about the various roles of genes in the human body, thus promoting participants' understanding of genomic and pharmacogenomic research. Researchers, together with other stakeholders, should protect participants and research communities from possible harm that might arise from returning individual pharmacogenomic research results to participants. In addition, researchers should sensitize communities about the various functions of genes in an individual's body.
This may help community members to overcome misconceptions about receiving results from genomic and pharmacogenomic research studies. Similarly, community representatives should encourage sharing of correct information about genomic research to demystify the existing diagnostic misconceptions in communities. Community representatives and research participants should raise concerns or questions about unclear information on genomic and pharmacogenomic research and the implications of the results for their lives. Research regulators should develop guidelines and frameworks that facilitate adequate understanding of genomic and pharmacogenomic results at the individual and community levels.

Our study had some limitations. Some interviews were conducted virtually via Zoom due to the COVID-19 pandemic, whose mitigation measures restricted face-to-face interactions; thus, the authors were not able to capture non-verbal communication for some respondents. Sometimes, the research team experienced challenges with network connectivity. However, Zoom interviews were substituted with telephone calls, and respondents were encouraged to share additional information via email after the interviews.

Our findings show that there is a consensus among the different stakeholders that CE can play a vital role in promoting research participants' understanding of individual pharmacogenomic research results. Respondents mentioned several CE approaches, including adapting existing music, dance and drama clubs, group education, and training peer clients to communicate understandable information about pharmacogenomic research and the implications of its results to research communities. However, these approaches should comply with the ethical standards of conducting research, such as respect for participants' privacy and confidentiality.
We recommend further research to explore the feasibility of using the existing CE approaches to communicate simple and understandable information, and the implications of the results, to research participants at community level. Of concern, respondents emphasized the need to engage genetic counselors when determining the suitable approach or approaches to achieve meaningful community engagement, yet many research institutions conducting genomic research do not have genetic counselors. We recommend building capacity for genetic counselors in Uganda and other sub-Saharan countries where genomic and genetics research is conducted. We also recommend developing a framework that respects individual and community interests, values and literacy levels when communicating pharmacogenomic research information and results at individual and community levels.
Current challenges and practical aspects of molecular pathology for bone and soft tissue tumors

Brief description of molecular pathology and its significance in oncology

Mesenchymal neoplasms constitute a large group of tumor entities characterized by their taxonomic complexity and complex management. The publication of the 5th edition of the WHO classification of bone and soft tissue tumors in 2020, and of subsequent articles between 2020 and 2023, reflects the enormous progress in our knowledge of these tumors. The updating of taxonomic classifications, the redefinition of diagnostic criteria, the invention of new molecular diagnostic techniques, the development of prognostic indices, and the design of therapeutic targets correspond to the most critical innovations in translational research in the molecular pathology of mesenchymal neoplasms and, by extension, in the fight against cancer. Therefore, integrating molecular pathology into the management of these tumors is a critical tool that links the development of scientific knowledge with diagnostic and therapeutic improvements for patients with mesenchymal neoplasms.

Contextualization of soft tissue and bone tumors in molecular pathology

Molecular pathology has revolutionized our understanding of tumor biology to the point of exponentially increasing our knowledge of the natural history of neoplasms. Although each type of mesenchymal neoplasm usually presents several molecular alterations, some of them carry a greater weight in tumor biology. Basically, we can differentiate three groups of mesenchymal neoplasms based on their molecular pathology (Table ).
The first group refers to the group of tumors that do not present a specific molecular alteration of clinical interest; i.e., there is no alteration sufficiently relevant for it to be considered of diagnostic and/or therapeutic importance (this is the case, for example, of undifferentiated pleomorphic sarcoma). In the second group, we would find neoplasms with recurrent molecular alterations, among which we can highlight gene fusions, point mutations, deletions, and amplifications (e.g., Ewing sarcoma). Finally, we would have a group of neoplasms with complex karyotypes. These can occur de novo or, more frequently, due to the degeneration of previously existing neoplasms with more favorable characteristics (e.g., malignant peripheral nerve sheath tumor). The presence of a complex karyotype is believed to originate in the loss of tumor suppressor genes of enormous importance, such as RB1, NF1, and P53, and in the phenomena of chromoanagenesis.

The mortality rate for high-grade metastatic sarcomas remains very high. Sarcomas are highly heterogeneous morphologically, genetically, and in their behavior, so in addition to chemotherapy, which has a limited role in disease control, new strategies are needed for their treatment. In this sense, applying precision medicine strategies, which must start from a more precise diagnosis, is of extraordinary interest in such a heterogeneous group of tumors.

The pivotal role of diagnostic biomarkers in molecular pathology of soft tissue tumors

Soft tissue tumors represent a diverse group of neoplasms with complex molecular underpinnings. Advances in molecular pathology have revolutionized our understanding and classification of these tumors, primarily due to the discovery and understanding of specific diagnostic biomarkers (see Table ).

Lipogenic neoplasms: Atypical pleomorphic and spindle-cell lipomatous tumors, previously variants of well-differentiated liposarcomas, are now distinctively identified by the absence of MDM2 and CDK4 gene amplification and, notably, the deletion of 13q14 and loss of the RB1 gene in many cases.

Fibroblastic and myofibroblastic neoplasms: The presence of the NCOA2 gene rearrangement, leading to AHRR::NCOA2 fusion in most angiofibromas of soft tissue, pinpoints its molecular pathology. Another critical discovery is the EWSR1::SMAD3 fusion in EWSR1-positive fibroblastic tumors, providing clear diagnostic criteria.

Fibrohistiocytic neoplasms: Molecular insights have led to a shift in classification. For instance, the once-termed malignant fibrous histiocytomas have been divided into multiple distinct entities.
Vascular neoplasms: The identification of GNAQ or GNA14 gene mutations in anastomosing hemangioma and the discovery of two main fusion types in epithelioid hemangioendothelioma, WWTR1::CAMTA1 and YAP1::TFE3, have significantly advanced our diagnostic accuracy.

Smooth muscle neoplasms: The expression of the viral EBER RNA in Epstein-Barr virus–positive smooth muscle tumors, leading to MYC overexpression, underlines its diagnostic significance. Additionally, in most cases, inflammatory leiomyosarcomas showcase a near-haploid karyotype, further refining our diagnostic approach.

Striated muscle neoplasms: Newly identified fusions like TFCP2 with FUS or EWSR1 and MEIS1::NCOA2 in rhabdomyosarcomas have shifted our understanding of their origins and aggressiveness.

Osteochondrogenic neoplasms: For instance, soft tissue chondromas are now known to harbor FN1::FGFR gene fusions in up to 50% of cases.

Neoplasms of the nerve sheath: The malignant melanotic tumor of the nerve sheath is linked to Carney's complex in a significant proportion of cases, underpinned by the loss of the tumor suppressor gene PRKAR1A.

Other soft tissue neoplasms: The discovery of NTRK gene rearrangements (reviewed in 6) has been groundbreaking due to their therapeutic implications. The presence of these rearrangements is a pivotal diagnostic criterion for certain tumors (Fig. ).

Undifferentiated round cell sarcomas: The emergence of advanced high-throughput methodologies has significantly reshaped our understanding and categorization of small round cell sarcomas (SRCSs). This evolution, fueled by the integration of extensive genetic, epigenetic, and transcriptomic insights along with progressive clinicopathological data and experimental frameworks, culminated in the establishment of a novel chapter dedicated to "undifferentiated SRCSs of bone and soft tissue" in the 2020 WHO classification of soft tissue and bone tumors. As these technologies evolve, they are expected to uncover even more uncommon SRCS variants. Predominantly, the most common fusion-driven entities that resemble Ewing sarcoma in morphology include round cell sarcomas characterized by the fusion of EWSR1 or FUS with non-ETS family genes (notably EWSR1::NFATC2, FUS::NFATC2, and EWSR1::PATZ1), sarcomas with CIC rearrangements (primarily CIC::DUX4), and sarcomas exhibiting BCOR genetic changes (chiefly BCOR::CCNB3). The consequences of these fusions on intracellular signaling pathways emphasize the shift in our understanding of tumor biology based on molecular findings.

After the 5th edition of the WHO classification (2020–2023), the realm of soft tissue tumor molecular pathology witnessed further revelations:

Fibrogenic neoplasms: Sarcomas with KMT2A gene rearrangements and fibroblastic tumors with PRRX1::NCOA1 fusion have enriched the spectrum of soft tissue tumors.

Fibrohistiocytic neoplasms: The distinction of soft tissue giant cell tumors from their bony counterparts, based on the absence of H3-3/H3F3 gene mutations, is another testament to the precision offered by molecular biomarkers.

Striated muscle neoplasms: The inflammatory rhabdomyoblastic tumor, a newly recognized entity, embodies the influence of molecular pathology in refining our understanding of soft tissue tumor classification.

Other soft tissue neoplasms: The identification of EWSR1::SSX2 fusion in a subset of undifferentiated soft tissue sarcomas, NUTM1 gene rearrangements in colorectal sarcomas, and FN1 gene rearrangements in chondroid neoplasms underscores the relentless evolution of soft tissue tumor classification.

Glioma-associated oncogene 1 (GLI1), a transcription factor activated by the Sonic hedgehog pathway, plays a role in the development of various tumors, including gliomas, alveolar rhabdomyosarcomas, and osteosarcomas.
GLI1 amplifications and gene fusions are also found in diverse mesenchymal tumors like pericytoma with t(7;12), gastroblastoma, plexiform fibromyxoma, and a new category of GLI1-altered mesenchymal neoplasms. This group includes "nested glomoid neoplasm," a new tumor type with unique architecture, and a range of low- to high-grade neoplasms, some resembling myoepithelial carcinoma. Pericytomas with t(7;12) and nested glomoid neoplasms have distinct morphologies and immunohistochemical profiles, expressing markers like S100, SMA, CDK4, and MDM2. GLI1 immunohistochemistry can aid in diagnosing these rare tumors, potentially eliminating the need for molecular testing.

Integrating diagnostic biomarkers into the classification of soft tissue tumors has transformed the landscape of tumor diagnosis, prognosis, and treatment. It underscores the importance of a multidisciplinary approach, combining histopathology with molecular pathology, for the optimal management of patients with these neoplasms.

Mesenchymal bone neoplasms are a group of tumors originating from the bone's mesenchymal tissue. These neoplasms vary in their aggressiveness, clinical presentation, histology, and genetics. Below is a comprehensive review of the different categories of these neoplasms and their associated molecular characteristics (see also Table ):

Chondrogenic neoplasms

Chondromyxoid fibroma: This neoplasm is linked to rearrangements of the GRM1 gene.
Overexpression of GRM1 indicates the diagnosis of chondromyxoid fibroma. However, a small percentage does not show this overexpression, suggesting the possibility of other genetic alterations.

Synovial chondromatosis: Associated with the FN1::ACVR2A and ACVR2A::FN1 fusions. These fusions are present in most benign synovial chondromatoses and some malignant cases.

Osteogenic neoplasms

Osteoid osteoma: Characterized by the presence of FOS rearrangement in most cases. A common neoplastic spectrum with osteoblastoma and epithelioid hemangioma is postulated. The FOS family plays a crucial role in cellular transcription. Although the diagnosis of almost all osteoid osteomas does not require demonstration of FOS rearrangements, their detection can be useful in selected cases in which a clear radiology–pathology correlation is missing.

Osteoblastoma: Similar to osteoid osteoma, it shows rearrangement of FOS. Also observed, albeit less frequently, is rearrangement of FOSB.

Giant cell–rich neoplasms

Non-ossifying fibroma: A subset of these tumors arises in the setting of neurofibromatosis type 1 and Jaffe-Campanacci syndrome and is associated with mutations in the NF1 and KRAS genes. Additionally, they are characterized by mutations in KRAS and FGFR1.

Notochordal neoplasms

Poorly differentiated chordoma: This neoplasm shows a homozygous SMARCB1/INI1 gene deletion. A small fraction of cases show gene loss without a detectable mutation. Furthermore, some cases have a codeletion of the EWSR1 gene.

Other bone neoplasms

Adamantinoma: This neoplasm displays both numerical and structural chromosomopathies, being associated with trisomies and chromosomal translocations. The dedifferentiation process in these neoplasms is linked to the loss of P53 and the acquisition of a complex karyotype.

Identifying and understanding these neoplasms' genetic and molecular alterations provides a deeper insight into their biology and potential therapeutic interventions.
Molecular diagnostics have markedly enhanced our ability to classify and diagnose neoplastic entities, particularly with the refinement of tumor taxonomy. This progress is exemplified by the GENSARC study, which indicated that molecular investigations altered the initial diagnoses made by specialized pathologists in about 14% of sarcoma cases. Let us delve deeper into the panorama of molecular diagnostic techniques, both traditional and avant-garde.

Traditional molecular techniques (reviewed in )

Karyotyping: A foundational technique, karyotyping was paramount in identifying the earliest chromosomal deviations in sarcomas. Nevertheless, its resolution, limited to around five megabases, impedes the recognition of specific gene mutations, relegating it to a more ancillary role in contemporary diagnostics.

Comparative genomic hybridization (CGH): This technique can spotlight DNA amplifications and deletions. However, its limited prowess in pinpointing point mutations and gene fusions has confined its use to specific cases and research contexts.

Fluorescence in situ hybridization (FISH): Widely employed for detecting translocations and amplifications, FISH, in tandem with PCR, stands as a cornerstone in the current diagnostic landscape (Fig. ). Yet, its blind spots include an inability to discern discrete gene mutations and some proximate gene fusions (Fig. ).

Polymerase chain reaction (PCR): A linchpin in many diagnostic labs, PCR has served as the molecular technique of choice for routine diagnostic procedures. However, it requires a prior understanding of the mutation under scrutiny and has constraints in detecting alternate partner genes.

Modern molecular techniques

Next-generation sequencing (NGS): A quantum leap in genetic analysis, NGS allows for the parallel sequencing of vast numbers of DNA fragments.
Its precision is further honed with targeted versions such as "gene panels by hybridization and capture" and "massive amplicon sequencing," both of which can accurately identify alternate fusion partner genes (Fig. ).

NanoString: A paradigm shift, this method quantifies RNA directly, bypassing the need for retrotranscription or prior amplification, using DNA probes adorned with fluorescent barcodes. Especially adept at analyzing subpar RNA samples, it parallels massive sequencing in identifying a diverse array of partner gene fusions.

Non-targeted massive sequencing: Offering an expansive purview, these techniques can decode any genetic rearrangement without prior knowledge. This category encompasses the holistic RNA-Seq, WES, and WGS techniques. The "nanopore" method is a groundbreaking offshoot, which deduces mutations by discerning ionic current shifts as DNA fragments traverse a nanoporous protein membrane.

Methylome studies: These studies probe methylation patterns beyond mere genetic material, offering a more resilient and granular analysis. Their binary approach, focusing on methylation or its absence, furnishes a unique lens for diagnostics.

Recommendations for molecular testing in bone and soft tissue sarcomas

Consider any molecular result in the right clinical and pathological context. Although some gene fusions are very specific to a particular tumor type (e.g., EWSR1::FLI1 in Ewing sarcoma), other gene fusions are much more non-specific, such as ETV6::NTRK3, which can be seen not only in sarcomas but also in leukemias or carcinomas. Molecular findings should, therefore, never be evaluated in isolation, but always in the appropriate clinical and morphological context.

Perform a foundational assessment with traditional techniques. Begin with FISH, which is widely used for detecting translocations and amplifications, especially when specific genetic alterations or translocations are suspected.
Use PCR where a particular mutation is suspected, given its widespread use and the requirement of prior knowledge of the mutation. Make a judicious use of advanced analysis with modern techniques. Next-generation sequencing (NGS) should be considered, primarily with a targeted approach. These allow for a comprehensive genetic analysis, are adept at identifying alternative fusion partner genes, and could be invaluable when a broader genetic landscape needs examination. In cases with low-quality RNA samples or when a more expansive genetic view is needed, NanoString is recommended due to its ability to quantify RNA and identify diverse partner gene fusions directly. For sarcomas of uncertain classification or when a holistic view of the genetic material is required, non-targeted massive sequencing techniques like RNA-Seq, WES, and WGS can be employed. The “nanopore” method , while more avant-garde, can offer a unique perspective by deducing mutations from ionic current shifts. Have supplemental analysis available: Methylome studies can be considered an auxiliary diagnostic tool, especially when the genetic material is compromised, providing a robust and detailed analysis based on methylation patterns. Always have a holistic or integrative consideration: Despite the advancements in molecular diagnostics, the foundation of sarcoma diagnosis remains rooted in histopathological findings. Molecular pathology should be used as a complementary tool, enhancing the specificity and accuracy of the diagnosis (Fig. ). Be aware of cost and infrastructure: While modern techniques might seem resource-intensive, their potential efficiency, especially in complex sarcoma cases, could render them more cost-effective in the long run. When selecting a testing strategy, balancing the cost, available infrastructure, and diagnostic precision are essential. Immunohistochemistry can constitute an excellent surrogate of molecular genetics. 
Over the last decade, advancements in molecular genetics have revolutionized diagnostic approaches, leading to the development of novel, cost-effective, and rapid diagnostic tests using immunohistochemical stains. These new immunohistochemical markers are broadly classified into three categories: proteins indicative of genetic alterations such as PDGFRA, SMARCB1 [INI1], H3K27me3, SMARCA4 [BRG1], β-catenin, MDM2, MYC, RB1, CDK4, and SDHB; protein products resulting from gene fusions including STAT6, TFE3, ALK, FOSB, BCOR, DDIT3, SS18::SSX, CAMTA1, CCNB3, and pan-TRK; and diagnostic markers identified by gene expression profiling, such as MUC4, DOG1, NKX2-2, TLE1, SATB2, and ETV4. These advancements have significantly enhanced the speed and precision of diagnostics, particularly in the realm of sarcoma identification and classification. In summary, a layered approach, integrating traditional and modern techniques, can provide a comprehensive and accurate molecular diagnosis for bone and soft tissue sarcomas.

Reporting

The results of ancillary tests (e.g., immunohistochemistry (IHC) or molecular evaluations) should be included in the report where relevant. This is the case, for example, for the detection of translocations in round cell sarcomas, isocitrate dehydrogenase (IDH1 and IDH2) mutations in conventional chondrosarcoma, and MDM2 amplification in low-grade intramedullary and parosteal osteosarcoma. The International Collaboration for Cancer Reporting provides guidelines for standardized pathology reporting of soft tissue sarcomas. It reminds us that molecular test results should be integrated into biopsy or resection reports of osteoarticular tumor pathology.

Managing discrepancies

In cases where sarcoma diagnosis reveals discrepancies between results from different molecular techniques (like FISH vs NGS) or between molecular and histological/immunohistochemical findings, a multifaceted and cautious approach is recommended: Multidisciplinary review: Engage a multidisciplinary team including pathologists, molecular biologists, radiologists, and oncologists. This team can provide diverse perspectives and expertise, comprehensively analyzing all findings. Re-evaluate clinical and radiological data: Reassess the patient’s clinical history and radiological data. Sometimes, additional clinical context or imaging studies can provide insights that help reconcile conflicting results. Repeat or confirm tests: If feasible, repeat the tests that show discrepancies. For instance, if there is a mismatch between FISH and NGS results, consider repeating these tests or employing additional methodologies for confirmation. Integrate histological and molecular data: The histological context can sometimes provide essential insights that guide the interpretation of molecular results.
Ensure that molecular findings are correlated with histological and immunohistochemical features. Consider technical limitations: Understand the limitations of each technique. For instance, FISH is highly specific but may miss broader genomic alterations that NGS can detect. Conversely, NGS is comprehensive but may miss focal alterations detectable by FISH. Consultation with external experts: In particularly challenging cases, seeking a second opinion from external experts or reference laboratories can be invaluable. Patient monitoring and follow-up: In cases of unresolved discrepancies, close monitoring of the patient with frequent follow-ups may be necessary. This approach can help detect any progression or changes that might clarify the diagnosis. Document and report findings: Careful documentation of all findings and the decision-making process is crucial. This can be valuable for future reference, especially if the patient’s clinical situation evolves. Continued research and learning: Stay updated with the latest research and advancements in sarcoma diagnostics. New discoveries and technologies might provide solutions to current diagnostic challenges.

Setup of the pre-analytical phase in sarcomas

Sarcomas, especially those arising from bone tissues, present unique challenges during the biopsy processing phase. Bone sarcomas, in particular, often require decalcification processes to prepare the tissue for histological examination. However, decalcification can adversely affect the quality of nucleic acids, complicating subsequent molecular analysis. This makes the choice of decalcification agent and duration of the process pivotal. In addition to this, the intrinsic nature of sarcomas being deep-seated tumors further complicates biopsy collection. Proper handling becomes paramount, given the diverse subtypes of sarcomas, each with distinct molecular profiles. Preserving RNA integrity in these samples is essential, especially when gene fusion detection, a hallmark of many sarcoma subtypes, is anticipated. As such, the pre-analytical phase requires careful orchestration of multiple steps, ensuring the best possible preservation of molecular details.

Ensuring access to the tests of choice for sarcomas

With over 50 diverse subtypes, sarcomas present a tapestry of unique genetic alterations. While choosing the proper test is vital (see the “ ” section), ensuring that these advanced diagnostic tools are equitably available to the general population becomes equally crucial. Whether it is the specificity of FISH for detecting specific translocations or the comprehensive capability of NGS to survey the broader genomic landscape, the real challenge lies in having access to these tests. Healthcare systems and policies must prioritize the widespread availability of these sophisticated diagnostics. This equitable distribution ensures that every patient, regardless of socio-economic status or geographical location, has a fighting chance at accurate diagnosis and targeted therapy. Moreover, a keen understanding of sarcoma histopathology and its potential molecular underpinnings underlines the importance of continuous training and updates for pathologists and technicians involved in sarcoma diagnostics.

Optimizing the management of sarcoma samples

Given the heterogeneity of sarcomas, it is essential to obtain representative tissue samples. Ensuring that molecular tests do not exhaust these samples, especially when repeated biopsies are not feasible, is paramount. Multigene tests, as opposed to unigene tests, ensure that the original paraffin block of the diagnostic biopsy is not exhausted by repetitively accessing it each time a single-gene test is needed.
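As a toy illustration of why even a multigene panel result must still be read in context (a point stressed in the testing recommendations above), consider a minimal, hypothetical fusion-to-differential lookup. The map below is deliberately simplified and purely illustrative, not a diagnostic resource:

```python
# Hypothetical, deliberately simplified fusion-to-entity map for illustration
# only; real panels cover dozens of fusions and the differentials are broader.
FUSION_DIFFERENTIAL = {
    "EWSR1::FLI1": ["Ewing sarcoma"],
    "SS18::SSX2": ["synovial sarcoma"],
    "ETV6::NTRK3": [
        "infantile fibrosarcoma",   # a sarcoma
        "secretory carcinoma",      # a carcinoma
        "acute myeloid leukemia",   # a leukemia
    ],
}

def interpret_fusion(fusion: str) -> str:
    """Flag whether a fusion call narrows the diagnosis by itself or
    still needs morphological and clinical correlation."""
    entities = FUSION_DIFFERENTIAL.get(fusion)
    if entities is None:
        return f"{fusion}: not in panel; correlate with histology."
    if len(entities) == 1:
        return f"{fusion}: highly suggestive of {entities[0]} in the right context."
    return f"{fusion}: non-specific; differential includes {', '.join(entities)}."
```

With this sketch, EWSR1::FLI1 yields a near-specific suggestion, while ETV6::NTRK3 deliberately returns a multi-entity differential, mirroring the recommendation that such findings never be evaluated in isolation.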
Circulating biomarkers in sarcomas

The emerging field of liquid biopsies, which includes the analysis of circulating biomarkers such as cell-free DNA (cfDNA), circulating tumor DNA (ctDNA), circulating tumor cells (CTCs), and specific proteins, holds significant promise for sarcomas. These non-invasive tests, derived primarily from blood samples, can potentially provide invaluable insights into the molecular landscape of a sarcoma without the need for a traditional tissue biopsy. For sarcomas, these liquid biopsies could aid in early diagnosis, monitoring treatment responses, and detecting recurrences. They might even unveil potential therapeutic targets or resistance mechanisms in real time. However, the inherent rarity and heterogeneity of sarcomas pose distinct challenges. Given the myriad subtypes of sarcomas with unique genetic and molecular characteristics, standardizing and validating liquid biopsy protocols becomes a complex endeavor. Furthermore, due to the deep-seated nature of many sarcomas, the amount of ctDNA shed into the bloodstream might be lower than in more prevalent cancers, which can affect the sensitivity of these tests. Therefore, while liquid biopsies present a revolutionary avenue for sarcoma diagnostics and management, comprehensive research and methodological advancements are needed to realize their full potential.

Immunotherapies and sarcomas

While immunotherapies show promise in many cancers, their role in sarcomas is still evolving (reviewed in 34). Molecular pathologists play a pivotal role in researching the landscape of sarcomas, primarily in identifying and validating biomarkers that can guide immunotherapy. Through advanced techniques, they are adept at characterizing the prevalence of immune cells and discerning expression patterns of immune checkpoints like PD-1/PD-L1. These biomarkers, once validated, can be instrumental in determining the most suitable therapeutic strategies. However, given the complexity and heterogeneity of sarcomas, molecular pathology must continue its exploration and validation of new biomarkers to further refine and personalize immunotherapeutic interventions in these patients.

Novel issues in sarcoma treatment

As targeted therapies for sarcomas emerge, understanding the molecular drivers, resistance mechanisms, and potential combination strategies becomes essential for molecular pathologists. For instance, sequencing of receptor tyrosine kinases (RTKs) like KIT and PDGFRA in gastrointestinal stromal tumors (GISTs) can guide the use of targeted therapies like imatinib. However, as tumors might acquire resistance to these therapies, pathologists play a critical role in detecting secondary mutations that could necessitate a switch in treatment strategy. This deep molecular insight ensures precise initial treatment selection and dynamic therapy adjustments based on the tumor’s evolving molecular profile, optimizing patient outcomes. These observations underscore the necessity of tailored tumor profiling for each patient to pinpoint active signaling pathways, moving beyond blanket treatment approaches toward individualized, versatile treatment plans. Trials that match a specific therapy to shared oncogenic drivers across different diseases, like the CREATE trial, reflect this personalized approach. Furthermore, understanding patient-to-patient differences in drug metabolism and response can be instrumental in anticipating and counteracting resistance mechanisms.

Multidisciplinary tumor boards for sarcomas

Due to their intricate nature and myriad subtypes, sarcomas necessitate a collaborative approach to decision-making processes. Central to this collaboration is the multidisciplinary tumor board, where diverse specialists come together to discuss and design the optimal treatment plan for patients. Molecular pathologists play a pivotal role in these boards, as their detailed molecular insights can dictate the direction of treatment. For instance, if a molecular pathologist identifies a specific genetic mutation that makes a particular sarcoma subtype responsive to a targeted therapy, this information must be communicated in an accessible and understandable manner. Radiologists, for example, might need to understand the potential growth patterns or metastatic tendencies associated with that mutation. At the same time, surgical oncologists might adjust their strategies based on the predicted aggressiveness or behavior of the tumor. Additionally, medical oncologists can tailor their chemotherapeutic regimens based on these insights. Thus, effective communication within the board ensures that the patient receives a holistic, informed, and precise treatment strategy, maximizing therapeutic success and potentially improving outcomes.

Setting up national NGS networks for sarcomas

Creating a national framework for the molecular diagnostics of sarcomas is no small task, given the heterogeneity and intricacy of these tumors. Such networks provide standardized diagnostic protocols and ensure that even the less common sarcoma subtypes receive the attention they deserve. A stellar example of this approach’s success is seen in the efforts of the French sarcoma group, which has achieved remarkable progress in diagnosis and therapeutic strategies for sarcoma patients through their consolidated efforts. Similarly, Spain is making significant strides with projects such as IMPERAS (Estudio del IMPacto En supervivencia y calidad de vida de la Revisión centralizada del diagnóstico Anatomopatológico en Sarcomas de partes blandas; Study of the Impact on Survival and Quality of Life of Centralized Review of Pathologic Diagnosis in Soft Tissue Sarcomas), aiming to streamline sarcoma diagnostics and research.
This endeavor has gained momentum, especially with the additional support from AECC (Spanish Association Against Cancer), extending its reach and impact. These national initiatives underscore the importance of collaborative and standardized molecular diagnostic efforts in improving sarcoma patient outcomes. By leveraging the latest molecular insights and ensuring their widespread accessibility, these networks are pivotal in advancing sarcoma care nationally. In conclusion, the challenges in molecular pathology take on added intricacy in the realm of sarcomas due to their diversity and complexity. Addressing these issues requires a concerted effort, a deep understanding of sarcoma biology, and a commitment to continuous learning in this rapidly evolving field. In cases where sarcoma diagnosis reveals discrepancies between results from different molecular techniques (like FISH vs NGS) or between molecular and histological/immunohistochemical findings, a multifaceted and cautious approach is recommended: Multidisciplinary review: Engage a multidisciplinary team including pathologists, molecular biologists, radiologists, and oncologists. This team can provide diverse perspectives and expertise, comprehensively analyzing all findings. Re-evaluate clinical and radiological data: Reassess the patient’s clinical history and radiological data. Sometimes, additional clinical context or imaging studies can provide insights that help reconcile conflicting results. Repeat or confirm tests: If feasible, repeat the tests that show discrepancies. For instance, if there is a mismatch between FISH and NGS results, consider repeating these tests or employing additional methodologies for confirmation. Integrate histological and molecular data: The histological context can sometimes provide essential insights that guide the interpretation of molecular results. Ensure that molecular findings are correlated with histological and immunohistochemical features. 
Consider technical limitations: Understand the limitations of each technique. For instance, FISH is highly specific but may miss broader genomic alterations that NGS can detect. Conversely, NGS is comprehensive but may miss focal alterations detectable by FISH. Consultation with external experts: In particularly challenging cases, seeking a second opinion from external experts or reference laboratories can be invaluable. Patient monitoring and follow-up: In cases of unresolved discrepancies, close monitoring of the patient with frequent follow-ups may be necessary. This approach can help detect any progression or changes that might clarify the diagnosis. Document and report findings: Careful documentation of all findings and the decision-making process is crucial. This can be valuable for future reference, especially if the patient’s clinical situation evolves. Continued research and learning: Stay updated with the latest research and advancements in sarcoma diagnostics. New discoveries and technologies might provide solutions to current diagnostic challenges. Sarcomas, especially those arising from bone tissues, present unique challenges during the biopsy processing phase. Bone sarcomas, in particular, often require decalcification processes to prepare the tissue for histological examination. However, decalcification can adversely affect the quality of nucleic acids, complicating subsequent molecular analysis . This makes the choice of decalcification agent and duration of the process pivotal. In addition to this, the intrinsic nature of sarcomas being deep-seated tumors further complicates biopsy collection. Proper handling becomes paramount, given the diverse subtypes of sarcomas, each with distinct molecular profiles. Preserving RNA integrity in these samples is essential, especially when gene fusion detection, a hallmark of many sarcoma subtypes, is anticipated. 
As such, the pre-analytical phase requires careful orchestration of multiple steps, ensuring the best possible preservation of molecular details. With over 50 diverse subtypes, sarcomas present a tapestry of unique genetic alterations. While choosing the proper test is vital (see the “ ” section), ensuring that these advanced diagnostic tools are equitably available to the general population becomes equally crucial. Whether it is the specificity of FISH for detecting specific translocations or the comprehensive capability of NGS to survey the broader genomic landscape, the real challenge lies in having access to these tests. Healthcare systems and policies must prioritize the widespread availability of these sophisticated diagnostics. This equitable distribution ensures that every patient, regardless of socio-economic status or geographical location, has a fighting chance at accurate diagnosis and targeted therapy. Moreover, a keen understanding of sarcoma histopathology and its potential molecular underpinnings underlines the importance of continuous training and updates for pathologists and technicians involved in sarcoma diagnostics. Given the heterogeneity of sarcomas, it is essential to obtain representative tissue samples. Ensuring that molecular tests do not exhaust these samples, especially when repeated biopsies are not feasible, is paramount. Multigene tests, as opposed to unigene tests, ensure that the original paraffin block of the diagnostic biopsy is not exhausted by repetitively accessing it each time a single-gene test is needed. The emerging field of liquid biopsies , which includes the analysis of circulating biomarkers such as cell-free DNA (cfDNA), circulating tumor DNA (ctDNA), circulating tumor cells (CTCs), and specific proteins, holds significant promise for sarcomas . 
These non-invasive tests, derived primarily from blood samples, can potentially provide invaluable insights into the molecular landscape of a sarcoma without the need for a traditional tissue biopsy. For sarcomas, these liquid biopsies could aid in early diagnosis, monitoring treatment responses, and detecting recurrences. They might even unveil potential therapeutic targets or resistance mechanisms in real time. However, the inherent rarity and heterogeneity of sarcomas pose distinct challenges. Given the myriad subtypes of sarcomas with unique genetic and molecular characteristics, standardizing and validating liquid biopsy protocols become a complex endeavor. Furthermore, due to the deep-seated nature of many sarcomas, the amount of ctDNA shed into the bloodstream might be lower than in more prevalent cancers, which can affect the sensitivity of these tests. Therefore, while liquid biopsies present a revolutionary avenue for sarcoma diagnostics and management, comprehensive research and methodological advancements are needed to realize their full potential. While immunotherapies show promise in many cancers, their role in sarcomas is still evolving (reviewed in 34). Molecular pathologists play a pivotal role in researching the landscape of sarcomas, primarily in identifying and validating biomarkers that can guide immunotherapy. Through advanced techniques, they are adept at characterizing the prevalence of immune cells and discerning expression patterns of immune checkpoints like PD-1/PD-L1. These biomarkers, once validated, can be instrumental in determining the most suitable therapeutic strategies. However, given the complexity and heterogeneity of sarcomas, molecular pathology must continue its exploration and validation of new biomarkers to refine further and personalize immunotherapeutic interventions in these patients. 
As targeted therapies for sarcomas emerge, understanding the molecular drivers, resistance mechanisms, and potential combination strategies becomes essential for molecular pathologists. For instance, sequencing of receptor tyrosine kinases (RTKs) like KIT and PDGFRA in gastrointestinal stromal tumors (GISTs) can guide the use of targeted therapies like imatinib. However, as tumors might acquire resistance to these therapies, pathologists play a critical role in detecting secondary mutations that could necessitate a switch in treatment strategy. This deep molecular insight ensures precise initial treatment selection and dynamic therapy adjustments based on the tumor’s evolving molecular profile, optimizing patient outcomes. These observations underscore the necessity of tailored tumor profiling for each patient to pinpoint active signaling pathways, moving beyond blanket treatment approaches toward individualized, versatile treatment plans . Trials that match a specific therapy to shared oncogenic drivers across different diseases, like the CREATE trial, reflect this personalized approach . Furthermore, understanding patient-to-patient differences in drug metabolism and response can be instrumental in anticipating and counteracting resistance mechanisms . Due to their intricate nature and myriad subtypes, sarcomas necessitate a collaborative approach to decision-making processes. Central to this collaboration is the multidisciplinary tumor board, where diverse specialists come together to discuss and design the optimal treatment plan for patients. Molecular pathologists play a pivotal role in these boards, as their detailed molecular insights can dictate the direction of treatment . For instance, if a molecular pathologist identifies a specific genetic mutation that makes a particular sarcoma subtype responsive to a targeted therapy, this information must be communicated in an accessible and understandable manner. 
Radiologists, for example, might need to understand the potential growth patterns or metastatic tendencies associated with that mutation. At the same time, surgical oncologists might adjust their strategies based on the predicted aggressiveness or behavior of the tumor. Additionally, medical oncologists can tailor their chemotherapeutic regimens based on these insights. Thus, effective communication within the board ensures that the patient receives a holistic, informed, and precise treatment strategy, maximizing therapeutic success and potentially improving outcomes. Creating a national framework for the molecular diagnostics of sarcomas is no small task, given the heterogeneity and intricacy of these tumors. Such networks provide standardized diagnostic protocols and ensure that even the less common sarcoma subtypes receive the attention they deserve. A stellar example of this approach’s success is seen in the efforts of the French sarcoma group, which has achieved remarkable progress in diagnosis and therapeutic strategies for sarcoma patients through their consolidated efforts. Similarly, Spain is making significant strides with projects such as IMPERAS ( Estudio del IMPacto En supervivencia y calidad de vida de la Revisión centralizada del diagnóstico Anatomopatológico en Sarcomas de partes blandas ; Study of the Impact on Survival and Quality of Life of Centralized Review of Pathologic Diagnosis in Soft Tissue Sarcomas) , aiming to streamline sarcoma diagnostics and research. This endeavor has gained momentum, especially with the additional support from AECC (Spanish Association Against Cancer), extending its reach and impact. These national initiatives underscore the importance of collaborative and standardized molecular diagnostic efforts in improving sarcoma patient outcomes. By leveraging the latest molecular insights and ensuring their widespread accessibility, these networks are pivotal in advancing sarcoma care nationally. 
In conclusion, the challenges in molecular pathology take on added intricacy in the realm of sarcomas due to their diversity and complexity. Addressing these issues requires a concerted effort, a deep understanding of sarcoma biology, and a commitment to continuous learning in this rapidly evolving field. Comprehensive genome profiling The limited availability of effective targeted treatments for most types of sarcomas can, in part, be addressed by expanding our knowledge of the genetic mutations found in mesenchymal tumors. These tumors have not been as extensively studied as those originating from epithelial and neural tissues. Up until now, genetic research in sarcomas, including projects like The Cancer Genome Atlas (TCGA), has been constrained by small sample sizes, a focus on early-stage disease, a narrow range of histologies (like liposarcoma, leiomyosarcoma, and osteosarcoma), and a lack of comprehensive clinical data. Gounder et al. present the genetic characteristics of 7494 patients across 44 different sarcoma subtypes. This research sheds light on the potential clinical benefits of utilizing advanced genetic sequencing techniques for the diagnosis, prognosis, and management of connective tissue malignancies. For example, the initial diagnoses made by sarcoma pathologists were altered in 4% of patients following the analysis of genomic sequencing results. In these particular cases, two patients initially diagnosed with leiomyosarcoma were reclassified as having dedifferentiated liposarcoma, leading to a change in their treatment approach to include investigational MDM2 or CDK4 inhibitors. Additionally, a third patient initially diagnosed with sarcoma NOS was identified as having PEComa due to TSC2 loss and was recommended treatment with an mTOR inhibitor. Lastly, a fourth patient with MPNST was reclassified as having synovial sarcoma based on detecting an SS18::SSX2 fusion, leading to an evaluation for NY-ESO-1-based T-cell therapy.
In this study, 31.7% of patients had actionable genetic alterations influencing treatment decisions. Actionability definitions varied, highlighting evolving criteria. Genomic profiling informed therapy choices in 29% of patients, but access barriers persisted. The NCI-MATCH study exemplified the gap between genomic research and rare cancer care, emphasizing the need for equity in precision testing and improved clinical trial access. Molecular profiling holds immense potential for advancing sarcoma patient treatment. Tyrosine kinase inhibitors (TKIs) have become a staple in addressing sarcomas like GIST, where mutations in KIT and PDGFRA genes drive tumorigenesis. TKIs, such as imatinib, effectively target these mutations, but resistance often emerges due to secondary KIT or PDGFRA mutations. In other soft tissue sarcomas (STSs), approved targeted therapies are limited to TKIs like pazopanib, which may not effectively target sarcoma stem cells and can lead to resistance. Combining TKIs with inhibitors of other signaling pathways, such as IGF1R/IR or MEK, has been proposed to overcome resistance. Additionally, phosphoproteomic profiling has identified HSP90 inhibition as a potential strategy to overcome resistance. Furthermore, many studies highlight the promise of kinase inhibitors such as larotrectinib in treating NTRK-fusion-positive sarcomas and DNA minor groove-binding agents like trabectedin or mithramycin as potential inhibitors of EWSR1::FLI1-mediated transcription. While mithramycin faced toxicity challenges, second-generation analogs like EC-8042 offer clinical possibilities. A fascinating pilot study explores the potential of point-of-care nanopore sequencing for methylation-based sarcoma classification, aiming to overcome limitations associated with existing commercial arrays. The customized nanopore pipeline shows promise in diagnosing 11 sarcoma tumor types promptly.
However, broader validation across tumor types and centers, together with statistical refinement, is needed. An expanded classifier incorporating multiple data layers is expected to enhance accuracy. This advancement could lead to quicker, point-of-care sarcoma diagnosis and insights into sarcoma biology through methylation patterns, copy-number alteration, and translocation detection. These findings underscore the importance of comprehensive genomic profiling to identify activated signaling pathways, paving the way for patient-specific treatment regimens and biomarker-guided trials. Understanding interpatient pharmacokinetic variability is also crucial for predicting and addressing resistance. Molecular profiling is poised to usher in a new era of tailored and effective sarcoma treatments. However, given the potential constraints in terms of costs and resources, it is essential to establish a strategic approach for the prudent utilization of NGS and molecular profiling in sarcoma management. Artificial intelligence and molecular pathology The field of diagnostic pathology has become increasingly complex due to advances in both histomorphological and molecular profiling. Pathology has evolved to play a crucial role in diagnosing diseases, estimating prognoses, and predicting precision therapies. This has led to high expectations for applying artificial intelligence (AI) and machine learning, which can analyze intricate data in a quantitative and standardized manner, improving diagnostic accuracy. Recent research has shown that predicting specific molecular characteristics is possible based on tissues’ physical appearance or morphology. For example, a recent study from the French Sarcoma Group showcases the potential of deep learning (DL) in predicting the progression risk of localized GIST. While refinement is necessary for clinical application, DL can detect somatic mutations, notably the specific PDGFRA exon 18 D842V mutation.
This DL method can expedite treatment decisions, particularly for patients with intermediate-risk Miettinen GIST, who typically do not require adjuvant treatment, and high-risk Miettinen GIST, where avapritinib treatment is essential. Furthermore, this approach may prove invaluable in regions with limited access to molecular techniques and serve as a research tool for discovering fresh histological features from whole slide images. Enhancing pathologist visibility through involvement in the sarcoma patient experience The involvement of sarcoma pathologists in the diagnostic process enhances the sarcoma patient experience and sheds light on the pathologist’s vital role. Their expertise is indispensable in the context of precision medicine and shared decision-making. Sarcoma pathologists ensure accurate diagnosis and classification within an appropriate turnaround time, which is critical for tailoring precise treatments. Pathologists could contribute to a collaborative network by actively engaging with patients, fostering knowledge sharing and synergies. This approach promotes equality in precision medicine. Including patients in advisory boards empowers them in treatment decisions and drives strategies for implementing precision medicine. Interactive meetings facilitate community engagement and promote awareness of the pathologist’s essential contributions to sarcoma care. This holistic approach improves patient outcomes and elevates the visibility and significance of the sarcoma pathologist’s work. Ten advice/action points for the next generation of sarcoma pathologists Stay updated on evolving subtypes: Keep learning about emerging sarcoma subtypes and their molecular profiles to ensure accurate diagnosis and classification. Promote data integration: Advocate for seamless integration of molecular data into pathology reports, facilitating informed treatment decisions and enhancing patient care.
Foster effective communication: Promote open and effective communication within multidisciplinary teams to ensure a cohesive approach to sarcoma care and research. Advocate for resources: Advocate for adequate resources, including staffing, equipment, and digital pathology infrastructure, to support clinical responsibilities and research commitments. Collaborate actively in research: Actively participate in sarcoma research initiatives, contributing expertise in pathology to advance diagnostic techniques and treatment modalities. Prioritize workload management: Implement strategies for effective workload management, enabling pathologists to balance clinical duties with research involvement. Embrace digital pathology and AI: Embrace digital pathology and artificial intelligence tools, staying updated on their integration into diagnostics and research to enhance accuracy and efficiency. Mentor future pathologists: Dedicate time to mentor and educate the next generation of sarcoma pathologists, ensuring the continuity of expertise in the field. Engage in continuous learning: Commit to ongoing learning and professional development to remain at the forefront of sarcoma pathology advancements. Advocate for patient-centered care: Champion a patient-centered approach within multidisciplinary teams, ensuring patients’ unique needs and perspectives are considered in research and care decisions. By addressing these action points, sarcoma pathologists can overcome the challenges they face and continue to play a pivotal role in advancing research and enhancing the care of sarcoma patients. Molecular pathologists in bone and soft tissue sarcomas (BSTPath) face various challenges. The pre-analytical phase is intricate due to the need for decalcification in bone sarcomas, impacting nucleic acid quality. Biopsy collection is complicated because of deep-seated tumors and diverse subtypes, emphasizing RNA integrity preservation. Ensuring equitable access to advanced diagnostics for the 50+ sarcoma subtypes is crucial, emphasizing the role of healthcare systems in availability. Managing sarcoma samples effectively, especially when repeated biopsies are not possible, is vital, with multigene tests preserving original diagnostic biopsy blocks. Liquid biopsies and analyzing circulating biomarkers offer promise but require standardization and validation due to sarcoma rarity and heterogeneity. Immunotherapy’s evolving role in sarcomas necessitates ongoing biomarker validation by molecular pathologists. As targeted therapies emerge, pathologists detect resistance mechanisms, enabling personalized treatment plans.
Multidisciplinary tumor boards are essential for sarcoma care, with molecular insights guiding treatment decisions. National NGS networks streamline diagnostics, exemplified by French and Spanish initiatives. Molecular pathology advances through comprehensive genome profiling, kinase inhibitors, and innovative diagnostic techniques like nanopore sequencing. Artificial intelligence aids histomorphological and molecular analysis, improving accuracy. Involving sarcoma pathologists in patient care enhances the patient experience and their visibility. The focus of future sarcoma molecular pathologists will include staying updated, promoting data integration, fostering communication, advocating for resources, active research involvement, workload management, embracing digital pathology and AI, mentoring, continuous learning, and supporting patient-centered care. These efforts address BSTPath challenges, shaping the future of sarcoma care.
Internal medicine clerks’ motivation in an online course: a mixed-methods study | b251fab7-deeb-4db2-a037-913f47525a31 | 11703516 | Internal Medicine[mh] | Clinical clerkships are characterized by learning in an environment that involves direct patient care, which is often located at an inpatient ward. This ward is a dynamic, challenging and complex learning environment. The ward has a primary aim to provide patient care, however students (nurses and physicians) also need to learn, which can be a difficult combination. Clinical workplace learning (WPL) may face several challenges that lead to suboptimal training. For example, time pressure leads to suboptimal support, observation and assessment of learners by supervising clinicians, while the diagnostic process tends to become more complex over time . Also, students, teachers and patients may experience a lack of sustained relationships in clinical training . In a previous study, we described the implementation of a Small Private Online Course (SPOC) in a blended curriculum in our clerkship internal medicine that aimed to train clinical skills and competencies . Blended learning refers to a deliberate blending of face-to-face and online learning, with the goal of stimulating and supporting learning, which can improve learning and learner satisfaction when thoughtfully designed . A SPOC is a distinct form of an online course, that only allows a limited number of eligible students, mostly used locally with on-campus students . Besides their feasibility and potential to improve clinical skills, SPOCs also permit flexible learning, online collaboration and social interaction, and might therefore be a helpful addition to the clinical learning environment and its challenges . Although the effectiveness of online learning has been extensively investigated, teachers often experience difficulties in student motivation in online courses . 
Motivation can be either extrinsic or intrinsic; intrinsic motivation is preferred, as it refers to doing something because it is inherently interesting or enjoyable. Research has shown that intrinsic reasons for behavior can improve the learning, achievement and well-being of the learner. Extrinsic motivation, in contrast, refers to doing something because it leads to a separable outcome. According to Self-Determination Theory, three basic psychological needs can stimulate intrinsic motivation: the need for autonomy, competence and relatedness. With the intention to increase student motivation in our SPOC, its design was based on self-determination theory. In summary, relatedness is stimulated in the SPOC through the integration of peer-feedback sessions, peer-graded assignments with rubrics, group assignments and discussion forums within the SPOC. Besides mandatory contents, the SPOC has optional assignments and resources that aim to enhance students’ autonomy and just-in-time learning. Peer-feedback, self-assessment and tracking and ranking of course activity are integrated to train and assess the students’ competencies. The SPOC includes virtual patients and virtual reality ward round simulations to enhance learning in an authentic clinical context. Thus, the SPOC holds online learning activities that not only focus on extensive clinical training but also aim to make learners feel autonomous and more competent, and to increase feelings of social cohesion. Up to now, it remains unclear how these learning activities affect intrinsic motivation, in which contexts they do so, and which factors underlie this process. The aim of the current study was to investigate whether the distinct learning activities within the SPOC influence clerks’ intrinsic motivation, and in which context. Context Our SPOC was offered as a blended learning course, combining on-campus education with extensive online content. Clerks’ participation in the SPOC was mandatory during the full twelve weeks of their clerkships at the internal medicine departments.
The learning activities in the SPOC were based on SDT principles and focused on promoting autonomy, competence and relatedness amongst clerks and their clinical teachers through flexible and substantially optional lessons, authentic assignments and collaborative learning activities. Each learning activity was constructively aligned with its own learning outcomes and assessment. The weekly learning activities included authentic virtual patient cases, online (group) assignments, videos, discussion fora with peer-feedback, quizzes, and innovative forms of education, such as virtual reality and augmented reality. Although the clerks were geographically distributed, they all participated in the SPOC program in parallel with their peers. Every month, they had to complete four learning activities, some of which were mandatory (for example, the group assignments) to create an optimal setting for collaboration, while optional lessons aimed to tailor the SPOC to the distinct clerks’ needs. The SPOC included an online peer-feedback instruction video that was mandatory for all clerks. A clinical teacher logged into the online environment to show teacher presence in the course and to randomly evaluate peer-feedback contents. Study population The study focused on first-year Master’s students in Medicine who had just started their clinical clerkship Internal Medicine. Every month, 20–25 clerks start their clerkship at LUMC or other teaching hospitals in the area. Clerks work in different departments over a period of twelve weeks. Data collection took place over this period.
For the quantitative analysis, the validated Intrinsic Motivation Inventory (IMI) by Deci and Ryan was adapted to measure the students’ motivation in the online learning environment. It has five subscales: 1. interest/enjoyment (which is considered the self-report measure of intrinsic motivation), 2. perceived choice, 3. perceived competence (which correspond to the SDT principles), 4. perceived value/usefulness (which helps people to internalize and become self-regulating with respect to activities), and 5. relatedness. This questionnaire is presented in Appendix I. In addition, an interview protocol in Dutch was developed to answer the research question. The questions in this protocol focused on which learning activities motivate clerks for clinical workplace learning in the SPOC (see Appendix II for an English translation of the interview protocol). Data were derived from semi-structured small group interviews, which were conducted by a trained interviewer (FvB). The study protocol was reviewed and approved by the Educational Research Review Board of the Leiden University Medical Center in the meeting on 12 March 2019 (file number: OEC/ERRB/20190312). Participation was voluntary and based on informed consent (see Appendix III for the informed consent form). We analyzed the interview data by means of thematic analysis. We did not define a priori themes. Student enrolment started in September 2019 and ended by December 2020. Enrolment was interrupted between January 2020 and August 2020 due to the Covid pandemic, which resulted in significant adaptations to the clerkship during that period. All clerks entering the clinical clerkship Internal Medicine were eligible for enrolment. The students received an online questionnaire during their clerkship (which they could fill out in the 11th week at the latest) or received a hard copy in week 11, after which the interviews were planned.
They rated each question/statement from 1 to 7 on a 7-point Likert scale, where 1 indicated total disagreement and 7 full agreement with the particular statement. The statements were transformed into five measure scales: interest/enjoyment (Q1–6), perceived competence (Q7–12), perceived choice (Q13–18), value/usefulness (Q19–23), and relatedness (Q24–30). FvB conducted the small group interviews in week 12 of the clerkship (the first author is directly involved in student assessment). The authors used maximum variation sampling, which is a form of purposeful sampling, aiming to include the greatest variety of opinions. Therefore, the students with the highest and lowest scores on the IMI were invited to be interviewed. The interviews were audio-recorded and transcribed verbatim. Data analysis Quantitative data from the IMI questionnaire were exported, reversed where applicable (see items in italics in Appendix I) and analyzed using SPSS (version 25) for descriptive and reliability analysis. Data were checked for normality, and Cronbach’s alpha was calculated for all subscales. We performed an inductive thematic analysis on the qualitative data from the interviews. Data were exported into a coding template by using qualitative data analysis software (ATLAS.ti Scientific Software Development GmbH, Germany, version 22.0.11.0). EH and FvB subsequently read through the data and independently carried out a first coding and subcoding cycle (in vivo) followed by second cycle coding into themes. They individually developed an initial version of the code book based on a subset of interview data, which they discussed with each other until consensus was reached on the initial codes. The code book was then modified iteratively with the remainder of the data. Emerging subthemes in the code groups were organized in six main themes that were related to the students’ experiences of the SPOC (e.g., collaboration and utility) and SPOC system-related factors.
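Returning to the quantitative analysis: the scoring steps described above (reversing reverse-keyed 7-point Likert items, averaging items into the five IMI subscales, and computing Cronbach's alpha) can be sketched in Python/pandas. This is only an illustration of the arithmetic — the paper's analysis was done in SPSS, and the column names and the particular set of reverse-keyed items below are assumptions, not taken from the questionnaire.

```python
import pandas as pd

# Hypothetical layout: one column per IMI item, named q1..q30.
# The reverse-keyed item numbers are an illustrative assumption;
# the actual reversed items are marked in Appendix I of the paper.
REVERSED_ITEMS = {3, 9, 14, 16, 17, 26, 29}

SUBSCALES = {  # item ranges as described in the Methods section
    "interest_enjoyment":   range(1, 7),    # Q1-6
    "perceived_competence": range(7, 13),   # Q7-12
    "perceived_choice":     range(13, 19),  # Q13-18
    "value_usefulness":     range(19, 24),  # Q19-23
    "relatedness":          range(24, 31),  # Q24-30
}

def reverse_score(df: pd.DataFrame, scale_max: int = 7) -> pd.DataFrame:
    """Reverse-keyed Likert items: a score x becomes (scale_max + 1) - x."""
    out = df.copy()
    for item in REVERSED_ITEMS:
        col = f"q{item}"
        if col in out.columns:
            out[col] = (scale_max + 1) - out[col]
    return out

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def subscale_means(df: pd.DataFrame) -> pd.DataFrame:
    """Per-respondent mean score for each of the five IMI subscales."""
    scored = reverse_score(df)
    return pd.DataFrame({
        name: scored[[f"q{i}" for i in items]].mean(axis=1)
        for name, items in SUBSCALES.items()
    })
```

Alpha is computed per subscale by passing only that subscale's item columns to `cronbach_alpha`, which mirrors the per-subscale reliability check reported in the Results.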
Thereafter a final coding code book was defined. Our SPOC was offered as a blended learning course, combining on-campus education with extensive, online content. Clerks participation in the SPOC was mandatory during the full twelve weeks of their clerkships at the internal medicine departments. The learning activities in the SPOC were based on SDT principles and focused on promoting autonomy, competence and relatedness amongst clerks and their clinical teachers through flexible and substantially optional lessons, authentic assignments and collaborative learning activities. Each learning activity was constructively aligned with it’s own learning outcomes and assessment. The weekly learning activities included authentic virtual patient cases, online (group) assignments, videos, discussion fora with peer-feedback, quizzes, and innovative ways of education, like virtual reality and augmented reality. Whereas the clerks were geographically distributed, they all participated in the SPOC program parallel to their peers. Every month, they had to complete four learning activities, of which some were mandatory (for example the group assignments) for creating an optimal setting for collaboration, while optional lessons aimed to individualize the SPOC to the distinct clerks’ needs. The SPOC included an online peer-feedback instruction video that was mandatory to all clerks. A clinical teacher logged into the online environment to show teacher presence in the course and randomly evaluate peer-feedback contents. The study focused on first year Master students in Medicine who had just started their clinical clerkship Internal Medicine. Every month, 20–25 clerks start their clerkship at LUMC or other teaching hospitals in the area. Clerks work in different departments over a period of twelve weeks. Data collection took place over this period. 
This article represents a mixed-methods study to examine the effect of the SPOC’s learning activities on intrinsic motivation, by means of a questionnaire with quantitative analysis, and semi-structured small-group interviews with an in-depth qualitative analysis. For the quantitative analysis, the validated Intrinsic Motivation Inventory (IMI) by Deci and Ryan has been adapted to measure the students motivation in the online learning environment. It had five subscales: 1. interest/enjoyment (which is considered the self-report measure of intrinsic motivation) 2. perceived choice, 3. perceived competence (which are the same as the SDT principles), 4. perceived value/usefulness (which helps people to internalize and become self-regulating with respect to activities, e.g ), and 5. Relatedness . This questionnaire is presented in Appendix I. In addition, an interview protocol in Dutch was developed to answer the research question. The questions in this protocol focused on which learning activities motivate clerks for clinical workplace learning in the SPOC (see Appendix II for English translation of the interview protocol). Data were derived from semi-structured small group interviews, which were conducted by a trained interviewer (FvB). The study protocol has been reviewed and approved by the Educational Research Review Board of the Leiden University Medical Center in the meeting on 12 March 2019 (file number: OEC/ERRB/20190312). Participation was voluntary and based on informed consent (see Appendix III for informed consent form). We analyzed the interview data by means of thematic analysis . We did not define a priori themes. Student enrolment started in September 2019 and ended by December 2020. Enrollment was interrupted January 2020–August 2020 due to the Covid pandemic which resulted in significant adaptations to the clerkship during that period. All clerks entering the clinical clerkship Internal Medicine were eligible for enrollment. 
The students received an online questionnaire during their clerkship (which they could fill out in the 11th week at the latest) or received a hard copy in week 11, after which the interviews were planned. They rated each question/statement from 1 to 7 on a 7-point Likert scale, where 1 indicated total disagreement and 7 full agreement with the particular statement. The statements were grouped into five measurement scales: interest/enjoyment (Q1–6), perceived competence (Q7–12), perceived choice (Q13–18), value/usefulness (Q19–23), and relatedness (Q24–30). FvB conducted the small-group interviews in week 12 of the clerkship (the first author is directly involved in student assessment). The authors used maximum variation sampling, a form of purposeful sampling that aims to include the greatest variety of opinions. Therefore, the students with the highest and lowest scores on the IMI were invited to be interviewed. The interviews were audio-recorded and transcribed verbatim. Quantitative data from the IMI questionnaire were exported, reverse-scored where applicable (see items in italics in Appendix I) and analyzed using SPSS (version 25) for descriptive and reliability analysis. Data were checked for normality, and Cronbach's alpha was calculated for all subgroups. We performed an inductive thematic analysis on the qualitative data from the interviews. Data were exported into a coding template using qualitative data analysis software (ATLAS.ti Scientific Software Development GmbH, Germany, version 22.0.11.0). EH and FvB subsequently read through the data and independently carried out a first coding and subcoding cycle (in vivo), followed by second-cycle coding into themes. They individually developed an initial version of the code book based on a subset of interview data, which they discussed with each other until consensus was reached on the initial codes. The code book was then modified iteratively with the remainder of the data.
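As a concrete illustration of the scoring just described (reverse-scoring the negatively worded items and checking a subscale's internal consistency with Cronbach's alpha), a minimal sketch in Python might look as follows. The function names are our own; the authors' actual analysis was done in SPSS.

```python
from statistics import variance

def reverse_score(response, max_point=7):
    """Reverse-score one 7-point Likert response (1 <-> 7, 2 <-> 6, ...)."""
    return (max_point + 1) - response

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows,
    each a list of item scores belonging to one subscale."""
    k = len(rows[0])
    columns = list(zip(*rows))  # transpose to per-item columns
    item_var = sum(variance(col) for col in columns)       # sum of item variances
    total_var = variance([sum(row) for row in rows])       # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```

A subscale such as interest/enjoyment (Q1–6) would be passed as six-item rows, one per respondent; an alpha above 0.8, as reported for most subscales here, indicates good internal consistency.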
Emerging subthemes in the code groups were organized into six main themes related to the students' experiences of the SPOC (e.g., collaboration and utility) and SPOC system-related factors. Thereafter, a final code book was defined. Intrinsic motivation inventory Seventy-eight out of 184 eligible clerks filled out the IMI questionnaire (response rate 42%), and 76 finished all sub-questions. All categories' scores showed a normal distribution. The questionnaire yielded good internal consistency, with a Cronbach's alpha of >0.8 for all items except perceived relatedness, which did not improve on recalculation with items deleted. reports an overview of the mean scores, the standard deviation, Cronbach's alpha and number of respondents. The average score was 3.87 (range 3.53–4.20) on a 7-point Likert scale. Perceived competence and value/usefulness scored best for the course (4.02 and 4.20, respectively). Lower scores were found for the perceived interest/enjoyment, perceived choice and perceived relatedness items. Group interviews Four group interviews were conducted, involving 14 clerks in total, after which no new themes were identified from the data. Data analysis of the interview transcripts revealed six distinct themes: A. Collaboration with peers, B. Perceived usefulness, C. SPOC-related factors, D. Workload, E. Motivation and F. Performance. represents the themes and sub-themes. We believe themes A, B, E and F are directly related to the IMI questionnaire and intrinsic motivation, and they will be highlighted more extensively in the following section. Theme A: collaboration with peers Although collaborative learning through group assignments and online peer feedback were important subjects during the development of the SPOC, the peer feedback and group assignments did not work well for the students.
Peer feedback was often short and of low quality, and the students preferred direct, automatically provided feedback over being dependent on their peers for feedback. They did not feel (the need for) relatedness in the SPOC's assignments. Well, I had the feeling we were doing [the assignment] in a group, however in my opinion it did not really add anything. In my case, I knew it was [a group assignment], however in the end, I was the only one to admit the patient, so I wrote the patient case and added another student's name on it. In the third month, we had to make a treatment plan for diabetes, and then all peers just answered "yes, I agree". Theme B: perceived usefulness The students appreciated learning activities that they found informative, meaning the activities had learning value to them, particularly when they had additional value for clinical practice and the students felt they acquired relevant knowledge from them. From the interviews we learn that the online patient cases, the videos and the group assignment to prepare and solve a clinical problem were valued most (although the format of making the case together was often problematic). The students considered the essays less useful, due to the lack of feedback and, in their opinion, little learning value. The 3D virtual ward tour was perceived to be fun, but not very informative.
Theme C and D: workload and SPOC-related factors Workload and lack of time to finish the assignments were mentioned by the students. As a result, they had to complete the assignments at home. However, the SPOC was valued as an attractive substitute for clinical work when the students had spare time on the wards. Time constraints were a major theme in the interviews. As a result, amongst other reasons, the students appreciated shorter assignments that they could move through quickly (virtual patient cases, quizzes) with direct feedback, over longer assignments like essays. Theme E: motivation This theme includes motivational aspects that directly influence one's own learning process. Assignments that were marked as interesting or challenging were completed because the students wanted to complete them, and this was particularly the case for the optional learning activities. They indicated that feelings of autonomy, and finding challenge and interest in the assignments, are important contributors that stimulate working in the SPOC and doing optional assignments. Inequality was felt by the first group enrolled in the SPOC (their senior peers had not been required to complete it), and this resulted in a negative motivational state for participating in the SPOC. Ehm, there was a whole list of [the optional lessons], so anyone could find something that would be interesting to them, and it was optional so I liked it more, so you could just do the parts that you liked, any time, without [deadlines], so ehm yeah, in the end I did more of that compared to the [mandatory] SPOC assignment. […] I think now we are Master students, you just do it for yourself, so at this point, for me it is complementary, so I take it seriously and consider it as a learning experience, and for me that is really motivating. Theme F: performance Direct feedback, for example in short quizzes, was appreciated. However, it appeared that the current system of tracking the interns' progression and performance in the SPOC was not sufficient.
They had a need for more assessment of their competency, either automatically or from the clinical teacher. The students indicated that for some activities they could get rewards (badges and points) by just clicking the buttons, which did not motivate them to put much effort into the activities. '[… .] there is no added value to commit more time and effort [in answering the question], you can just type, let's say "smurf", to put it bluntly.' Ehm, yeah I don't know, it might be just a bit 'childish' between brackets to get those badges and points, and I did not really understand when you get more or less points, it might be clearer when you just see the checkboxes ticked for lessons you already made [… .] it was unclear to me to see what you had finished, because sometimes you already got this badge or points, while only clicking the activity to see what you have to do. This study explores the impact of the SPOC's assignments on students' intrinsic motivation. It shows that their perception of value/usefulness was fair, and this seemed to depend on whether or not the knowledge obtained from the course was complementary to their experiences in clinical practice. We argue that the assignments with the highest perceived utility (virtual patient cases, videos, and preparing and solving a clinical problem in a group) are situated in a more authentic clinical context than the assignments considered less useful (writing an essay, for example). This perception may be explained by the fact that knowledge transfer from the online environment to the clinical workplace may be enhanced by these authentic and contextual learning characteristics. Although online learning is not equivalent to seeing real patients and cannot replace clinical workplace learning, other studies demonstrate that virtual patients, for example, can in the right situation effectively improve knowledge and skills, such as clinical reasoning. Perception of the utility of a task directly stimulates motivation.
The expectancy-value theory argues that achievement motivation can be explained by learners' beliefs about how well they will do on an activity and the extent to which they value the activity. The item competence also scored fair in the questionnaire. From the interviews we learn that providing more insight into one's performance may improve the feeling of competence. Direct feedback, such as that obtained in the graded quizzes, clearly seems to meet the students' needs better than peer feedback. The first question is whether the SPOC sufficiently invites the students to strive for task mastery, competence and setting their own objectives instead of getting grades. Besides, the current curriculum's focus on grades and achievement could be a culprit in priming the students towards this attitude. Both factors could affect the quality of learner motivation. Peer feedback, and collaboration with peers in general, was suboptimal, and this is also reflected in the lower score on the IMI item relatedness. The interviews show that the lack of feeling related in the SPOC may be caused by the fact that the peer feedback and feelings of coherence in the online learning environment were not ideal. Optimization of peer feedback in online environments is a more general challenge, also found in other studies. Peer feedback is considered inadequate when students lack knowledge or are not critical, and they may prefer teacher feedback. They may also perceive peer feedback merely as a mandatory task, while not understanding its importance in their own learning process and that of their peers. It might seem an attractive option to remove all peer-feedback sessions from the SPOC. However, in the right situation, online interaction, discussion and feedback may be associated with better learning outcomes. The question is how this can be optimally organized in the SPOC.
Other studies describe that guiding students in giving peer feedback, their awareness of the requirements of assessment feedback, and training and clarification of the role the student takes in the feedback process are key principles of effective feedback. Although the provision and reception of peer feedback was a prespecified learning objective in our SPOC, and students received an online training, it may be valuable to guide the students better in their task and to give a clearer explanation of its purpose. The group assignments to prepare and solve a patient case consistently received good ratings, despite organizational issues. Students could generate and solve their own patient case in this assignment, which was challenging and interesting to them. Research shows that students value assignments that involve generating their own questions in an online course. Such an assignment can also stimulate active participation and relatedness between students. Therefore, generating assignments for other students might be a motivating task, and this might explain the positive appreciation of this specific learning activity in the SPOC. Our study shows that students were especially motivated to be involved in the SPOC by activities that were challenging, interesting and optional (i.e., not mandatory). The latter increased their feelings of autonomy. From the literature we learn that feelings of autonomy, challenge and interest in learning assignments are directly related to motivation. However, the questionnaire results demonstrate that perceptions of interest/enjoyment and perceived choice were low in the SPOC. This finding may call for a shift towards assignments that students enjoy (short assignments/quizzes, virtual patient cases, videos and solving patient cases, instead of essays) and more course flexibility.
These findings imply that perceived usefulness, competence and autonomy could be improved by making deadlines more flexible and improving just-in-time learning. So should we strive for solely optional assignments, without any deadlines? The problem is that fully self-paced courses have the fewest assessment options and typically require automated grading, with fewer or no options for peer feedback, which is not desirable as described above. Moreover, exclusively optional assignments may lead to loss of the blended curriculum, which we know can have a positive impact on mastery of core material. To our knowledge, this is the first study evaluating student motivation in a medical SPOC, and the first study using the IMI to address intrinsic motivation in the context of an online course. Limitations Due to COVID regulations, there might have been a difference or change in student 'characteristics' during the study inclusion period. There has been considerable variety within the groups pre-, during, and post-COVID, because the students were exposed to COVID restrictions to a greater or lesser extent. It was not possible to create and compare equivalent groups, so this may have had unknown effects on our outcomes. Furthermore, feelings of inequality existed among the students in the first group that was interviewed, since they were the first group to be enrolled in the SPOC, while their peers were not required to participate. These feelings might have influenced their motivational state and thus their motivational intensity. We did not find any relevant deviations in the quantitative IMI data; however, we noticed more negative perceptions in the first group interview. Lastly, the authors cannot exclude a non-responder bias, possibly influencing the study data due to questionnaire completion by students with the highest levels of motivation.
However, the small group interviews included students with high as well as low levels of motivation, showing that both groups were represented in the IMI. Our study may give other teachers tools to develop online assignments with positive outcomes with regard to student motivation. It implies that motivation can be optimized by creating useful, authentic cases that aid students in obtaining clinical skills that can be directly transferred into clinical practice. The study confirms that students can be motivated by feelings of autonomy, by challenging assignments that they find interesting, and by student-generated assignments. It also showed that tracking the students' online performance is required for their feelings of competence.
Although carefully designed for giving and perceiving peer feedback, our SPOC did not overcome the known challenges of online collaboration.
High expression of eukaryotic elongation factor 1‐alpha‐2 in lung adenocarcinoma is associated with poor prognosis | e05528a3-8494-496c-bcbb-6f8358c3c2e3 | 11551808 | Anatomy[mh] | Lung cancer is the most common form of cancer worldwide and accounts for the highest cancer‐related mortality. Among lung cancers, adenocarcinoma is the most common histological subtype. Adenocarcinoma is considered to progress stepwise from atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA) and lepidic adenocarcinoma. In 1995, Noguchi et al. demonstrated that AIS has an extremely favorable outcome, with a 5‐year survival rate of 100%, whereas invasive carcinoma, even in the early stage, has a poorer outcome. To clarify the molecular mechanisms involved in the malignant progression of early‐stage lung adenocarcinoma we compared the differences in gene expression profiles between AIS and early but invasive adenocarcinoma. Using quantitative proteome analysis, Dai et al. have shown that eukaryotic elongation factor 1‐alpha‐2 (eEF1A2) is overexpressed in early but invasive adenocarcinoma, in comparison to AIS. eEF1A2 and eEF1A1 are two isoforms of eukaryotic elongation factor 1 alpha (eEF1A). eEF1A is a subunit of the eukaryotic elongation factor 1 complex, and its main function is delivering aminoacylated tRNAs to the A site of the ribosome in the peptide chain elongation phase. eEF1A2 is located on chromosome 20q13 whereas eEF1A1 is located on chromosome 6q13. eEF1A1 and eEF1A2 have 93% identical amino acid sequences, but the expression pattern of the isoforms differs. eEF1A1 is expressed almost ubiquitously, except in neurons, heart and skeletal muscle, in the fetal phase and then its expression declines gradually and is replaced by eEF1A2 during development. 
Although the reason for this organ-specific distribution of the two isoforms remains largely unknown, Chambers and Newbery have shown that, in mice, the absence of eEF1A2 leads to degeneration of motor neurons and early death. On the other hand, eEF1A2 mutation has been reported in human patients with epileptic encephalopathy/intellectual disability. Among various malignancies, eEF1A2 is overexpressed in breast cancer, ovarian cancer, hepatocellular carcinoma, prostate cancer, plasmacytoma, pancreatic cancer and lung cancer. Gene amplification of eEF1A2 has been demonstrated in ovarian tumors and lung cancer, and its amplification has also been reported in prostate cancer cell lines. eEF1A2 has been considered an active oncogene, as it is overexpressed in various malignant tumor tissues and has been shown to confer tumorigenicity in a fibroblast cell line. Many studies have indicated that eEF1A2 has an anti-apoptotic function and is responsible for actin cytoskeletal rearrangement, acinar morphogenesis, and epithelial-mesenchymal transformation. Some studies have indicated that eEF1A2 activates the Akt pathway. The relationship between eEF1A2 overexpression and prognosis in lung cancer is still under debate, as various studies have shown poorer or more favorable outcomes in patients with tumors overexpressing eEF1A2. However, as these studies examined advanced cases, they did not include early-stage lung adenocarcinoma, such as AIS. In the present study, we investigated eEF1A2 protein expression in surgically resected lung adenocarcinomas using immunohistochemistry, and compared the results between AIS, MIA, and invasive adenocarcinoma. We also studied the clinicopathological implications of eEF1A2 expression. In addition, we demonstrated eEF1A2 gene amplification in lung adenocarcinoma using fluorescence in situ hybridization (FISH) analysis.
Clinical samples

Clinical samples of heart, skeletal muscle, lung, liver and kidney were obtained from two autopsy cases at the University of Tsukuba Hospital, for Western blot analysis. Frozen surgical specimens of lung adenocarcinoma used in Western blot analysis were obtained from materials that had been resected between 2014 and 2017 at the University of Tsukuba Hospital (Ibaraki, Japan) (Supporting Information S1: Table ). All the samples had been obtained at the University of Tsukuba Hospital with appropriate informed consent. The fresh tissues had been frozen and stored at −80°C. For immunohistochemical analysis, we prepared tumor tissue microarray (TMA) slides consisting of 175 formalin‐fixed and paraffin‐embedded (FFPE) tumor tissues that had been resected between 1999 and 2007 (Table ). Histopathological classification was based on the WHO Classification of Thoracic Tumors, 5th edition. Invasive non‐mucinous adenocarcinomas were classified into subgroups (lepidic, acinar, papillary, micropapillary, solid) on the basis of the predominant histological patterns. The pathological stages of these 175 tumors were evaluated according to the UICC TNM Classification of Malignant Tumors, 8th edition. Epidermal growth factor receptor (EGFR) mutation profiles and the results of ALK oncoprotein immunohistochemistry were collected from the clinical database (Supporting Information S1: Table ). This study was approved by the ethics committee of the University of Tsukuba Hospital (No. H27‐205).

Confirmation of antibody specificity

Western blotting was performed using the protocol reported previously. To verify the specificity of the antibodies against eEF1A1 and eEF1A2, Western blotting was performed using monoclonal anti‐eEF1A1 antibody (ab157455; Abcam), polyclonal anti‐eEF1A2 antibody (GTX102326, GeneTex), and mouse monoclonal anti‐GAPDH antibody (sc‐32233, Santa Cruz Biotechnology). eEF1A1 and eEF1A2 recombinant proteins (Abnova) were used as positive controls.
Cell culture and transfection using siRNA

A549 (a lung adenocarcinoma cell line) was maintained in DMEM (Fujifilm Wako Pure Chemical) supplemented with 10% FBS under 5% CO2 at 37°C. eEF1A2‐specific siRNA (Stealth RNAi, Thermo Fisher Scientific) (Supporting Information S1: Table ), Lipofectamine RNAiMAX (Thermo Fisher Scientific), and OPTI‐MEM (Thermo Fisher Scientific) were added to the well and incubated for 20 min at room temperature. A549 cells were then seeded into the well containing the siRNA complex. The final siRNA concentration was 10 nM. The cells were incubated at 37°C in a CO2 incubator for 48 h, and then collected for protein extraction. Stealth RNAi™ GCDuplex #2 (Thermo Fisher Scientific) was used as a siRNA negative control.

Immunohistochemical analysis

The TMA consisting of 175 lung adenocarcinoma specimens was subjected to immunohistochemical and clinicopathological analysis. A 2‐mm core of each tumor was sampled and made into a tumor tissue array block. When a tumor showed mixed histological patterns, the tumor core was sampled from the predominant histological pattern. For example, the tumor tissue core of "lepidic adenocarcinoma" was sampled from the lepidic component of the tumor tissue. Then, 3‐μm‐thick sections were cut from the array blocks, deparaffinized, and dehydrated. Antigen retrieval was performed by autoclave in 10 mM citric acid buffer (pH 6) at 115°C for 10 min. Immunohistochemistry was performed using a Histostainer 36A (Nichirei Biosciences). The sections were incubated with anti‐eEF1A1 antibody diluted 1:200, or with anti‐eEF1A2 antibody diluted 1:100, for 30 min at room temperature. These antibodies against eEF1A1 and eEF1A2 were also used for Western blotting. The sections were subsequently incubated with the secondary antibody (Dako REAL EnVision Detection System; Agilent Technologies), detected with DAB (Dako DAB+ Liquid; Agilent Technologies), and counterstained with hematoxylin.
In cases where more than 10% of the tumor cells were stained, we judged the tumor to be "positive," as reported previously.

Quantitative genomic PCR

Genomic DNA was extracted from 10‐μm‐thick paraffin‐embedded sections by digestion with proteinase K (Qiagen), followed by use of a magLEAD (Precision System Science). Oligonucleotide primers for EEF1A2 were designed using Primer3 ( http://primer3.sourceforge.net/ ). GAPDH was used as an internal control gene for normalization. Quantitative PCR analysis was carried out using SYBR Premix Ex Taq (Perfect Real Time; Takara Bio). The PCR reactions were carried out on a StepOnePlus Real‐Time PCR System (Thermo Fisher Scientific) at 95°C for 30 s, followed by 40 cycles of 95°C for 5 s and 60°C for 31 s. A ratio (tumor/normal) of ≥1.5 was defined as representing gene amplification.

Fluorescence in situ hybridization (FISH)

FISH was performed using the protocol reported previously. The 5‐μm‐thick serial sections from FFPE tissues were subjected to dual‐color FISH using an EEF1A2‐CEN20p probe (GSP Lab.). Using a fluorescence microscope (Keyence) with single interference filter sets for green (FITC), red (Texas Red), and blue (DAPI), FISH signals were enumerated in non‐overlapping tumor cell nuclei.

Statistical analysis

Statistical analyses were performed with the SPSS Statistics package, Version 26 (IBM). Correlations with clinicopathological features were analyzed using the chi‐squared test. Survival curves were calculated using the Kaplan‐Meier method and assessed using the log‐rank test. The endpoint of this study was disease‐free survival; survival time was measured from the date of resection to the date of recurrence. The relationship between eEF1A2 amplification by qPCR and immunohistochemical positivity was examined with the Mann‐Whitney test. Univariate and multivariate analyses were carried out using the Cox proportional hazards model.
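The qPCR readout above (GAPDH normalization and the ≥1.5 tumor/normal cutoff) can be sketched in a few lines. This is an illustrative ΔΔCt‐style calculation only; the paper does not state the exact quantification formula, and the function names and Ct values below are our own assumptions.

```python
def relative_copy_number(ct_target_tumor, ct_ref_tumor,
                         ct_target_normal, ct_ref_normal):
    """Tumor/normal copy-number ratio of a target gene (e.g. EEF1A2),
    normalized to a reference gene (e.g. GAPDH), via 2^-(ddCt)."""
    delta_tumor = ct_target_tumor - ct_ref_tumor      # dCt in tumor
    delta_normal = ct_target_normal - ct_ref_normal   # dCt in matched normal
    return 2 ** -(delta_tumor - delta_normal)

def is_amplified(ratio, threshold=1.5):
    """Apply the study's cutoff: tumor/normal ratio >= 1.5 => amplification."""
    return ratio >= threshold

# Hypothetical Ct values for illustration only
ratio = relative_copy_number(24.0, 25.0, 26.0, 25.5)  # 2^1.5, about 2.83
print(round(ratio, 2), is_amplified(ratio))
```

A lower target Ct relative to the reference in tumor versus normal tissue translates into a ratio above 1, and the 1.5 threshold then flags amplification.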
Confirmation of antibody specificity

eEF1A2 and its isoform eEF1A1 have very high sequence homology (93%) at the amino acid level.
First, we validated the specificity of the antibodies. Both antibodies against eEF1A1 and eEF1A2 detected their recombinant proteins specifically and separately on Western blots (Figure ). Each antibody also detected the organ‐specific distribution of each isoform in protein extracts of normal human organs; eEF1A1 was distributed in lung, liver, and kidney, whereas eEF1A2 was distributed in heart and skeletal muscle (Figure ). The lung cancer cell line A549 expressed both eEF1A1 and eEF1A2 (Figure ). After transfection of A549 with si‐eEF1A2, eEF1A2 expression was suppressed but eEF1A1 showed no change (Figure ).

eEF1A2 expression in lung adenocarcinoma and clinicopathological features

Western blot analysis was conducted on two specimens of surgically resected lung adenocarcinoma. One specimen showed expression of both eEF1A1 and eEF1A2, while the other showed expression of only eEF1A1 (Figure ). Immunohistochemical analysis of normal lung tissue and AIS demonstrated scattered eEF1A2 expression in a few alveolar cells (Figure ), whereas eEF1A1 showed cytoplasmic expression in alveolar and bronchial cells (Figure ), and in inflammatory cells such as macrophages and lymphoid cells. Lung adenocarcinoma showed cytoplasmic expression of both eEF1A1 and eEF1A2 (eEF1A1: Figure , eEF1A2: Figure ). Invasive tumors with mixed histological patterns tended to show similar expression throughout the tumor, including any lepidic component. For example, eEF1A2‐positive solid adenocarcinoma showed eEF1A2 positivity in both the solid pattern and the lepidic pattern (Figure ). We examined 175 lung adenocarcinomas immunohistochemically. Forty adenocarcinomas were positive for eEF1A2 (Table ). Among them, 39 cases were diagnosed as invasive adenocarcinoma, and the remaining one as MIA (Table ).
Histopathologically, the eEF1A2 positivity rate was 10% (1/10) in MIA, 23% (10/44) in lepidic, 27% (7/26) in papillary, 28% (5/18) in acinar, 47% (15/32) in solid, 25% (1/4) in micropapillary, and 5% (1/20) in invasive mucinous adenocarcinoma. All 20 specimens of AIS were negative for eEF1A2 (Table ). We investigated the correlation between eEF1A2 expression and the clinicopathological features of the patients using the chi‐squared test. eEF1A2 expression was significantly associated with sex (P = 0.001), histology (AIS, MIA vs invasive carcinomas, P = 0.004), pT factor (pTis, pT1 vs others, P = 0.021), pleural invasion (pl0 vs pl1‐3, P = 0.022), vascular invasion (P = 0.001) and history of smoking (P < 0.001). EGFR mutation status and ALK oncoprotein expression did not show any significant association with eEF1A2 expression (Supporting Information S1: Table ). In terms of disease‐free survival, Kaplan‐Meier curve analysis indicated that patients with eEF1A2‐positive tumors had a significantly poorer prognosis than those with eEF1A2‐negative tumors (Figure ; log‐rank test, P = 0.046). Univariate Cox analysis demonstrated that sex (P = 0.001), pathological stage (0, 1 vs others) (P < 0.001), pT factor (pTis, pT1 vs others) (P < 0.001), lymph node metastasis (pN0 vs pN1) (P < 0.001), pleural invasion (pl0 vs others) (P < 0.001), lymphatic permeation (Ly0 vs Ly1) (P < 0.001), vascular invasion (V0 vs V1) (P < 0.001), history of smoking (P = 0.001) and eEF1A2 expression (negative vs positive) (P = 0.048) had statistically significant correlations with disease‐free survival (Table ). In multivariate Cox analysis, pT factor (P = 0.026), lymph node metastasis (P = 0.016), lymphatic permeation (P = 0.015), and vascular invasion (P = 0.022) remained significant, but eEF1A2 was not selected as an independent prognostic factor (Table ).
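As a minimal illustration of the disease‐free survival analysis above, a product‐limit Kaplan‐Meier estimator can be written directly. The actual curves were computed in SPSS; the follow‐up times below are invented, and two patient groups would additionally be compared with a log‐rank test.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of disease-free survival.
    times: follow-up duration; events: 1 = recurrence, 0 = censored.
    Returns (time, survival probability) at each recurrence time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = c = 0  # recurrences / censorings observed at time t
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                d += 1
            else:
                c += 1
            i += 1
        if d:
            surv *= 1 - d / at_risk  # step down only at event times
            curve.append((t, surv))
        at_risk -= d + c
    return curve

# Invented follow-up data for five patients
print(kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

Censored patients leave the risk set without lowering the curve, which is the key difference from a naive fraction of recurrence‐free patients.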
Quantitative genomic PCR and FISH analysis

Five tumors that were immunohistochemically positive for eEF1A2, and two that were immunohistochemically negative for eEF1A2, were subjected to qPCR analysis. Among the 5 eEF1A2‐positive tumors, 4 showed increases of genomic eEF1A2 DNA (tumor/normal ratio >1.5) (Figure ). One eEF1A2‐negative tumor showed a mild increase of genomic eEF1A2 DNA (T/N ratio: 1.53) (Figure ). After the qPCR analysis, two specimens, including an eEF1A2‐amplified tumor and a tumor without eEF1A2 amplification, were selected and subjected to FISH analysis. This revealed that the eEF1A2‐amplified tumor also showed eEF1A2 amplification (T/N ratio: 8.58), whereas the tumor without eEF1A2 amplification showed no eEF1A2 amplification (T/N ratio: 1.373) (Figure ). The eEF1A2/CEN20p ratio was 2.19 (557/254) in the tumor with increased genomic eEF1A2 DNA, whereas it was 1.50 in the other tumor without eEF1A2 amplification.
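The FISH readout above reduces to a pooled gene‐to‐centromere signal ratio over the counted nuclei (e.g. 557 EEF1A2 signals vs 254 CEN20p signals, ratio 2.19). The sketch below is illustrative; the function name and the second, per‐nucleus example are our own.

```python
def fish_gene_to_centromere_ratio(per_nucleus_counts):
    """per_nucleus_counts: list of (gene_signals, centromere_signals)
    pairs enumerated in non-overlapping tumor cell nuclei.
    Returns the pooled gene/centromere signal ratio."""
    gene = sum(g for g, _ in per_nucleus_counts)
    cen = sum(c for _, c in per_nucleus_counts)
    return gene / cen

# Pooled counts reported in the text: 557 EEF1A2 vs 254 CEN20p signals
print(round(fish_gene_to_centromere_ratio([(557, 254)]), 2))  # 2.19
```

Normalizing to the centromere probe distinguishes focal gene amplification from whole‐chromosome 20 gains, which would raise both counts together.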
Discussion

In this study we immunohistochemically validated the relatively high expression of eEF1A2 in invasive adenocarcinoma (39/144 cases) relative to MIA (1/10 cases) or AIS (0/21 cases). This confirmed our previous study involving proteome analysis.
Although several studies have demonstrated overexpression of eEF1A2 in invasive lung adenocarcinoma, the present study is the first to have compared the expression of eEF1A2 among AIS, MIA and invasive adenocarcinoma. Among the histologic subtypes, eEF1A2 expression was detected more frequently in solid‐type adenocarcinomas than in the other subtypes (papillary, acinar, micropapillary, etc.). Since solid‐type adenocarcinoma is the representative histological subtype of poorly differentiated adenocarcinoma, eEF1A2 expression may be associated with histological differentiation. Furthermore, it was noteworthy that the expression rate in lepidic adenocarcinoma (23%, 10/44) was markedly higher than in AIS and MIA (3%, 1/31). Since AIS and MIA are composed of lepidic‐type adenocarcinoma, this finding suggests that overexpression of eEF1A2 occurs at the lepidic adenocarcinoma stage (well differentiated but invasive adenocarcinoma). The increased expression of eEF1A2 in invasive carcinoma relative to AIS or MIA suggests that eEF1A2 may have a functional role in invasion. In several tumor cell lines, including hepatocellular carcinoma, mouse plasmacytoma, pancreatic cancer, and breast cancer, eEF1A2 expression activates the Akt pathway, and this activation suppresses apoptosis and promotes cell proliferation. In addition to its primary function in peptide chain elongation, eEF1A is also known to bind to actin and act on the cytoskeleton. It has been reported that eEF1A2 promotes the formation of cell pseudopodia and enhances cell invasion and migration in breast cancer cell lines. In lung cancer, it has been reported that eEF1A2 interacts with HSP90AB1 and promotes epithelial‐mesenchymal transition via the TGF‐β/SMAD pathway.
Although these studies do not settle the issue of whether eEF1A2 has a direct functional role in lung adenocarcinoma invasion, there is a possibility that these oncogenic properties of eEF1A2 could play some role in invasion, and this would be an interesting subject for future research. It is interesting to note that in our experiment, while pure AIS specimens did not show eEF1A2 expression, specimens of eEF1A2‐positive invasive adenocarcinoma tended to express eEF1A2 in both the lepidic pattern and in non‐lepidic patterns (Figure ). This finding suggests that eEF1A2 is expressed in an adenocarcinoma subset with invasive capability, but that eEF1A2 expression alone might not be sufficient to confer invasive capability on individual tumor cells. Considering the multistep nature of malignant progression of lung adenocarcinoma, the process of invasion may also be multistep. If so, future research to clarify the role of eEF1A2 in invasion would be a promising avenue. Among the histological subtypes of invasive adenocarcinoma, the predominantly solid subtype showed a higher rate of eEF1A2 positivity (47%) than the other subtypes (5%–28%) (Table ). The solid subtype is known to have a poorer prognosis and a closer association with smoking history. In the present study, a high proportion of patients with eEF1A2‐positive tumors had a history of smoking (91.8%, 34/37; Table ), indicating some common background with the solid subtype. Molecular targeted therapy of the solid subtype is still under debate: while the solid subtype is known to have a lower frequency of EGFR mutations and a higher prevalence of KRAS mutations, it also includes a higher proportion of "pan‐negative" tumors lacking seven common driver mutations. One study has demonstrated a higher tumor mutational burden in the solid subtype.
Although the number of cases we studied was limited (34 cases), there was no significant association between EGFR mutation status and eEF1A2 expression (Supporting Information S1: Table ). The solid subtype might consist of heterogeneous tumor populations, and further study is needed to better clarify the relationship between solid morphology and eEF1A2 expression. Our disease‐free survival analysis showed that tumors with eEF1A2 overexpression were associated with a significantly poorer outcome (Figure ). We found four papers that had reported on the relationship between eEF1A2 expression and prognosis in lung cancer. Three of them stated that high eEF1A2 expression indicated a poorer prognosis than low eEF1A2 expression, while Kawamura et al. found that high eEF1A2 expression was associated with a better prognosis. They performed immunostaining of 50 cases of lung adenocarcinoma and 19 cases of squamous cell carcinoma ranging from Stage I to Stage III, using an eEF1A2 antibody different from the one we used. In their study, 82% of the adenocarcinomas were eEF1A2‐positive, which was much higher than the positivity rate in the present study (26%, 40/154, excluding Stage 0) and in two others (28%, 64%). We list three possible reasons for this difference. First, the antibody used by Kawamura et al. differed from ours. If their eEF1A2 antibody had reacted with eEF1A1 protein to some extent, the positive rate may have been higher than that of an antibody that reacted specifically with eEF1A2. Secondly, there is a possibility that the immunostaining method had not been optimized and that non‐specific positive images had been detected. In the present study, the antigen retrieval conditions and antibody concentration were adjusted so that positivity was detected only in areas where eEF1A2 is thought to be specifically distributed, such as cardiac muscle, pancreatic islets of Langerhans, and cerebral neurons. Kawamura et al.
performed antigen retrieval under high‐pH conditions, and it is possible that there were nonspecific reactions. Third, the case populations may have led to differences in the prognostic analysis. For example, in a study of eEF1A2 expression and prognosis in breast cancer, eEF1A2 was often expressed in the luminal type, and the prognosis of the luminal type was better than that of the basal type, resulting in a better prognosis for eEF1A2‐expressing patients. In the present study, all cases were lung adenocarcinoma, whereas Kawamura et al. used 50 cases of lung adenocarcinoma and 19 cases of squamous cell carcinoma for prognostic analysis. Our qPCR and FISH analyses demonstrated an increase of genomic eEF1A2 DNA in specimens of invasive adenocarcinoma (Figure ). This result was in accord with former studies employing comparative genomic hybridization, which showed an increase of eEF1A2 gene copy number in lung adenocarcinoma. One of those studies found a correlation between eEF1A2 gene amplification and protein overexpression. Our data also suggested a tendency for a simultaneous increase of genomic DNA and protein, although the number of cases was too small to draw a definitive conclusion. Amplification of genomic DNA is thought to be one of the mechanisms of eEF1A2 overexpression in lung adenocarcinoma. Clinically, eEF1A2 may be a therapeutic target for the treatment of malignant tumors. Plitidepsin, an inhibitor of eEF1A2, has recently been approved in Australia as a third‐ or fourth‐line treatment for multiple myeloma, and another inhibitor, Metarrestine, is being tested in a phase I clinical trial for the treatment of metastatic solid tumors in the United States. In conclusion, eEF1A2 shows overexpression during the course of malignant progression of lung adenocarcinoma, and disease‐free survival analysis revealed that patients with eEF1A2‐overexpressing tumors have a significantly poorer prognosis.
One possible reason for the overexpression of eEF1A2 in lung adenocarcinoma may be an increase of genomic eEF1A2 DNA.

Mariko Yamato: conceptualization (equal); investigation (equal); writing—original draft (lead). Tomoko Dai: conceptualization (equal); investigation (equal); writing—original draft; writing—review and editing (supporting). Yoshihiko Murata: investigation (supporting); writing—original draft (supporting). Tomoki Nakagawa: investigation (supporting). Shinji Kikuchi: investigation (supporting). Daisuke Matsubara: resources (supporting). Masayuki Noguchi: conceptualization (equal); project administration (equal); resources (lead); supervision (lead); writing—review and editing (lead).

Masayuki Noguchi and Daisuke Matsubara are Editorial Board members of Pathology International and co‐authors of this article. To minimize bias, they were excluded from all editorial decision‐making related to the acceptance of this article for publication. The remaining authors declare no conflict of interest.

Figure S1. Immunohistochemistry for eEF1A1 in normal lung tissue and in lung adenocarcinoma. (a) Normal lung tissue. (b) Tumor tissue with eEF1A1 expression. Figure S2. Immunohistochemistry for eEF1A2 in the lepidic and non‐lepidic components within the same tumor specimen. (a) Lepidic component. (b) Non‐lepidic component. Table S1. Clinicopathological features of invasive adenocarcinoma cases subjected to Western blot analysis. Table S2. EGFR mutations, ALK oncoprotein expression, histological subtype (solid or non‐solid) and eEF1A2 expression. Correlation of eEF1A2 expression with each feature was analyzed by the chi‐squared test. Table S3. eEF1A2‐specific siRNA.
Factors influencing open gingival embrasures in orthodontic treatment: a retrospective clinical study | 41670b84-2607-450f-97eb-f83edbd7d1e9 | 11845328 | Dentistry[mh] | Open gingival embrasure space (OGES), also known as the "Black Triangle," refers to the visible triangular gap formed when the interdental gingival papilla cannot completely cover the gingival embrasure space. The presence of OGES not only disrupts the harmonious aesthetic appeal during smiling but may also lead to food impaction, thereby affecting periodontal health. According to research reports, orthodontic treatment can contribute to the occurrence of OGES, with statistics indicating a high incidence rate of 35.4–43.7% among orthodontic patients. With the increasing pursuit of aesthetics and the growing number of orthodontic patients, the open gingival embrasure space in the anterior teeth has garnered increasing attention. Although previous studies have explored the influencing factors of OGES, there is still controversy regarding these factors, possibly due to differences in research sample size, inclusion and exclusion criteria, and measurement items. Most studies focus on gender, age, gingival biotype, oral hygiene status, treatment duration, tooth extraction, and crowding, but often ignore variation in tooth movement during treatment. In particular, there is a lack of reports utilizing cone beam computed tomography (CBCT) technology to conduct in-depth research on the factors related to OGES. The null hypothesis of this study is that there is no correlation between the occurrence of OGES in the central incisor area and factors such as gender, age, duration of treatment, treatment method, tooth extraction, vertical and sagittal skeletal patterns, dentoalveolar height, and the inclination and movement of the central incisors (significance level P < 0.05, study power 0.8).
Our objective is to test this hypothesis by examining the relationship between these factors and the development of OGES. Sample collection and grouping This study has been approved by the Medical Ethics Committee of West China Hospital of Stomatology (Approval Number: WCHSIRB-D-2024-153) and followed the Declaration of Helsinki concerning human subjects. We retrospectively collected data from patients who completed treatment at the Orthodontics Department of West China Hospital of Stomatology, Sichuan University, between January 2016 and December 2023. The presence and location of open gingival embrasures (OGES) were described in medical records. This study focuses on the analysis of OGES in the central incisor area, with specific inclusion and exclusion criteria as follows: Inclusion criteria: Patients in the permanent dentition stage; Complete information, including intraoral digital photographs before and after orthodontic treatment (including frontal intraoral digital photographs), lateral cephalometric radiographs, and complete medical records; All patients had received oral hygiene education.
Exclusion criteria: The presence of OGES in the central incisor region before treatment; OGES between the lateral incisors and canines but not between the central incisors; A gap between the maxillary and mandibular central incisors, or supernumerary teeth between the central incisors, before treatment; Congenitally missing incisors or extraction of incisors during treatment; Patients who underwent orthognathic treatment or received periodontal surgery during treatment; Patients who underwent secondary orthodontic treatment; Poor oral hygiene, severe gingival bleeding, acute hypertrophic gingivitis in the anterior tooth region, deep overbite, restorations in the central incisor region, or tooth crown defects involving tooth contact points, which affect the judgment of the gingival embrasure space region; Systemic diseases, such as diabetes. After screening, a total of 330 patients met the inclusion criteria. Patients with OGES in the central incisor area before treatment (31 people), those without OGES between the central incisors but with OGES in other areas (5 people), and other patients who did not meet the criteria were excluded; finally, patients who met the requirements were included. As shown in Fig. , in this study, dentists graded the shape of the gingival papilla in the central incisor area using the Papillary Fill Index (PFI) proposed by Jemt et al. Among them, grades 0, 1, and 2 were included in the OGES group, while those with grade 3 were included in the non-OGES group (those with grade 4 were excluded). Ultimately, the participants were divided into the OGES group (130 individuals who developed OGES in the central incisor region after treatment but had no OGES before treatment) and the non-OGES group (200 individuals who had no OGES in the central incisor region before and after treatment).
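The grouping rule described above (PFI grades 0–2 assigned to the OGES group, grade 3 to the non-OGES group, grade 4 excluded) can be expressed as a small helper function. This is an illustrative sketch, not code used in the study:

```python
def assign_group(pfi_grade):
    """Map a Papillary Fill Index (PFI) grade to a study group.

    Grades 0-2 -> "OGES"; grade 3 -> "non-OGES"; grade 4 -> excluded (None),
    following the grouping rule stated in the methods.
    """
    if pfi_grade in (0, 1, 2):
        return "OGES"
    if pfi_grade == 3:
        return "non-OGES"
    if pfi_grade == 4:
        return None  # excluded from both groups per the study design
    raise ValueError(f"unknown PFI grade: {pfi_grade}")

print([assign_group(g) for g in range(5)])
# ['OGES', 'OGES', 'OGES', 'non-OGES', None]
```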
Intraoral digital photographs and lateral cephalometric radiographs taken before and after orthodontic treatment were collected from all patients, and the images were saved in JPG format. Information such as gender, age at the first visit, duration of orthodontic treatment (accurate to the month), tooth extraction, and extraction site was recorded by a doctor. Additionally, for the OGES and non-OGES groups, CBCT data before and after orthodontic treatment were saved for 39 and 33 individuals, respectively. The flowchart of this experiment is shown in Fig. . Landmarking and measuring of lateral cephalometric radiographs A dentist performed the cephalometric analysis on an online platform [accessible at https://www.zhibeicloud.com ]. The system has an average error as low as 0.94 ± 0.74 mm and an average accuracy rate of 89.33%. The landmarks used for cephalometric analysis are shown in Fig. . T1 and T2 represent the measurement items before and after orthodontic treatment, respectively. The before- and after-treatment measurements for U1-SN (°), U1-NA (mm), IMPA, and L1-NB (mm), as well as their changes and the absolute values of those changes, are represented as △U1-SN (°), △U1-NA (mm), △IMPA, △L1-NB (mm), |△IMPA|, and |△L1-NB (mm)| (Note: △ = T2–T1). CBCT marking and measurement All CBCT images were formatted into standard DICOM images and reconstructed into continuous slices of 0.3 mm in thickness. A dentist used the software Mimics Research 21.0 (Materialise NV, Leuven, Belgium) to perform axial adjustments on pre- and post-treatment CBCT scans and to conduct measurements. The central incisor was positioned in all three spatial planes. The method of orientation adjustment and measurement of the CBCT images is shown in Fig. . By manipulating the sagittal and coronal planes, an optimal view is achieved for measuring the desired parameters. Table lists the names of the measurement items and landmarks.
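As a concrete illustration of the change scores defined above (△ = T2–T1, plus their absolute values), the bookkeeping can be sketched in Python; the angle values below are hypothetical, not measurements from the study:

```python
def change_scores(t1, t2):
    """Return the treatment change (T2 - T1) and its absolute value."""
    delta = t2 - t1
    return delta, abs(delta)

# Hypothetical U1-SN angles (degrees) before (T1) and after (T2) treatment.
d_u1_sn, abs_d_u1_sn = change_scores(t1=104.0, t2=99.5)
print(d_u1_sn, abs_d_u1_sn)  # -4.5 4.5  (a negative change means lingual tipping)
```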
The variations in the distance from the contact point to the alveolar ridge crest, as well as the alterations in the angle formed by the incisal edges of the two central incisors, were calculated before and after treatment. These changes are denoted as △ICP-ABC and △R (Note: △ represents the difference between T2 and T1). Sample size calculation, quality control and statistical analysis The sample size for this study was calculated using a standard formula and guided by a previous study. With a Z value of 1.96, corresponding to a significance level of P = 0.05, an expected OGES prevalence of 39.6% taken as the midpoint of the reported range of 35.4–43.7%, and a d value of 0.06, the calculations indicated that a minimum of 255 participants were required. A dentist randomly selected 20 lateral cephalometric radiographs and CBCT scans, marked and measured them over a period of time, and then re-measured them after 2 weeks. The intraclass correlation coefficient (ICC) was used to test the consistency of the results for the same measurement item on the two occasions. After the repeatability and consistency of the measurements met the standard (ICC > 0.75), formal measurements began. Statistical analysis was performed using SPSS 25.0 software (IBM Corp., Armonk, NY, USA), with a significance level set at P < 0.05. When comparing differences in categorical variables such as gender, orthodontic treatment method, tooth extraction, and sagittal and vertical skeletal patterns between the two groups, Pearson's chi-square test, the continuity-corrected chi-square test, or Fisher's exact probability method was used according to the type of contingency table and the minimum expected frequency. Continuous variables were tested for normality (Kolmogorov–Smirnov test or Shapiro–Wilk test) and homogeneity of variance. If they followed a normal distribution and had equal variance, the two-sample t-test was used.
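The sample-size arithmetic described above is consistent with the standard single-proportion formula n = Z²·p·(1 − p)/d² (an assumption here, since the text does not spell the formula out); with Z = 1.96, p = 0.396, and d = 0.06 it reproduces the reported minimum of roughly 255 participants:

```python
# Single-proportion sample-size estimate (assumed formula): n = Z^2 * p * (1 - p) / d^2
z = 1.96   # standard normal value for alpha = 0.05 (two-sided)
p = 0.396  # expected OGES prevalence (midpoint of the reported 35.4-43.7% range)
d = 0.06   # allowable margin of error

n = z ** 2 * p * (1 - p) / d ** 2
print(round(n))  # -> 255, matching the minimum sample size reported in the study
```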
If they followed a normal distribution but had unequal variance, the two-sample t′ test was used. If they did not follow a normal distribution, the Mann–Whitney U test (also known as the Wilcoxon rank-sum test) was used. Variables that were statistically significant in the univariate analysis were extracted, and variables with multicollinearity (VIF > 10) were removed, followed by binary logistic regression analysis.
Consistency test results and characteristics of the study groups Intraclass correlation coefficients (ICCs) from 0.897 to 0.990 ( P < 0.01) confirmed high repeatability and consistency in the two measurements. Of the 330 patients, 130 were in the OGES group and 200 in the non-OGES group. OGES was present in both the maxillary and mandibular incisor areas in 32 (9.67%), only in the maxillary incisor area in 14 (4.23%), and only in the mandibular incisor area in 84 (25.38%). Demographic details and OGES distribution are in Table . Overview and clinical characteristics of the study population Based on the results of the chi-square test, we found a correlation between gender and the occurrence of open gingival embrasure spaces (OGES) in the maxillary and mandibular central incisor regions ( P < 0.05). Specifically, female patients exhibited a higher proportion of OGES. However, there was no significant correlation between the treatment methods, or tooth extraction, and the occurrence of OGES ( P > 0.05) (Table ). Meanwhile, we discovered a significant association between the initial age at diagnosis, treatment duration, and the occurrence of OGES in the maxillary and mandibular central incisors. The older the initial consultation age and the longer the treatment duration, the more likely OGES is to occur. Comparison of cephalometric measurements in the OGES and Non-OGES groups Participants were classified by ANB and MP-SN (Table ).
Chi-square test analysis revealed no statistically significant association between sagittal/vertical skeletal pattern and the occurrence of OGES ( P > 0.05). U1-SN (T1), U1-SN (T2), U1-NA distance (T2), △U1-SN, and △U1-NA correlated significantly with OGES in the maxillary central incisors ( P < 0.05), with smaller values indicating a higher OGES likelihood. Additionally, a smaller △L1-NB distance indicated a greater possibility of OGES. No statistically significant differences were found for the remaining measurement indicators (as shown in Table ). Comparison of CBCT between the OGES and Non-OGES groups After measuring and analyzing the CBCT data of the OGES and non-OGES groups, we found that in the maxillary central incisor region, the distances between ABC and ICP, △ICP-ABC, and the A-P overlap before and after treatment in the OGES group were significantly greater than those in the non-OGES group ( P < 0.05). The remaining measurement indicators, including TD, △R, root angulation, and crown width-to-height ratio, showed no significant correlation (Table ). In the mandibular central incisor region, the distances between ABC and ICP and △ICP-ABC before and after treatment in the OGES group were also significantly greater than those in the non-OGES group ( P < 0.05). Multivariate analysis of correlated indicators Because the CBCT subsample differed from, and was smaller than, the population measured on lateral cephalometric radiographs, the multivariate analysis focused on demographic and clinical characteristics and cephalometric indicators. Binary logistic regression identified initial consultation age, treatment duration, and △U1-SN as significant factors affecting OGES occurrence in the maxillary central incisor area, with older age, longer treatment, and smaller △U1-SN (i.e., greater retraction) correlating with a higher OGES likelihood (Table ).
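The multicollinearity screen mentioned above (dropping predictors with VIF > 10 before binary logistic regression) can be illustrated for the simplest two-predictor case, where VIF = 1/(1 − r²) with r the Pearson correlation between the predictors. The data and variable names below are illustrative assumptions, not study data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x, y):
    """Variance inflation factor in the two-predictor case: 1 / (1 - r^2)."""
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r ** 2)

# Hypothetical covariate values for five patients.
age = [21, 25, 30, 34, 40]
duration = [20, 24, 28, 33, 39]      # nearly collinear with age
overjet = [2.0, 4.5, 1.5, 3.0, 5.0]  # largely independent of age

print(vif_two_predictors(age, duration) > 10)  # True  -> drop one predictor
print(vif_two_predictors(age, overjet) > 10)   # False -> keep both
```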
Similarly, initial consultation age and treatment duration were significantly associated with OGES in the mandibular central incisor area.
Previous studies have reported risk factors including gender, age, gingival biotype, oral hygiene status, treatment duration, tooth extraction, and crowding. However, there is still controversy surrounding these perspectives. In our analysis, females were more prone to OGES, which may be associated with gender differences in gingival biology and hormonal levels. However, this finding is not consistent across all research and may be influenced by the sample size and the interplay of multiple factors.
Age also emerged as a significant factor, with an increase in the initial diagnosis age of patients correlating with a higher risk of OGES, aligning with the majority of existing research. Furthermore, the duration of treatment was significantly correlated with the occurrence of OGES, consistent with previous research findings. The sagittal movement of the anterior teeth, especially lingual movement of the maxillary central incisors, was found to correlate significantly with OGES occurrence, a result supported by the multivariate analysis. In our study, we found no evidence to suggest that labial movement of the anterior teeth leads to the occurrence of OGES, which is consistent with the findings of Vasconcelos et al. and An et al. The inclination of the anterior teeth was also found to correlate with OGES: the more lingually inclined the initial and final positions of the maxillary central incisors, the higher the likelihood of OGES. In our study, the mean U1-SN value after treatment for the maxillary central incisors in the non-OGES group was 105.31 degrees, significantly greater than the 99.07 degrees in the OGES group. According to Andrews' Six Elements, teeth that are upright in the center of the alveolar bone are crucial for achieving long-term stable treatment outcomes. In contrast to the maxillary central incisors, we observed no statistically significant correlation between the lingual inclination angle of the mandibular central incisors and the occurrence of OGES. This finding might be attributed to the thinner gingival tissue surrounding the mandibular central incisors, which could predispose them to a higher incidence of OGES. Currently, there is a scarcity of research examining the link between alveolar bone height and OGES using CBCT. In our study, we discovered that the distance from the contact point of the maxillary central incisor to the alveolar ridge crest was greater in patients who experienced OGES.
This suggests a potential association between alveolar bone height and the likelihood of OGES, a finding that echoes previous research conclusions and confirms the work of Tarnow et al. The relationship between tooth crowding and the development of OGES remains a matter of debate. We assessed the degree of crowding by measuring specific distances and angles of the central incisors, finding in univariate analysis a significant association only between the A-P distance of the upper central incisors and OGES occurrence. Additionally, other factors such as treatment method, tooth extraction, sagittal and vertical skeletal patterns, root angulation, and crown morphology were compared, but no significant statistical differences were found between the groups. This is consistent with some studies but not with others. We speculate that differences in the number and characteristics of the patients included may account for the divergence in conclusions between our study and previous ones. It is acknowledged that there are inherent limitations within the scope of this study. First, a number of factors were not taken into account, such as gingival biotype, oral hygiene status, and the practice of interproximal reduction (IPR). Furthermore, the sample size of CBCT scans was constrained, as these were not deemed a clinical necessity for every patient evaluated. In future research, it is recommended to incorporate sophisticated methodologies, such as machine learning, on a larger sample, and prospective studies may be warranted to enhance our comprehension of the risk factors associated with OGES. In conclusion, our study's univariate analysis indicated that the occurrence of OGES in the upper central incisors is significantly associated with initial consultation age, treatment duration, the initial and final angular positions and changes of the anterior teeth, and alveolar bone height.
For the lower central incisors, similar factors were identified. Binary logistic regression analysis confirmed that initial consultation age and treatment duration are independent influencing factors for the occurrence of OGES. Therefore, in orthodontic treatment, it is crucial to consider these various factors comprehensively to prevent or reduce the occurrence of OGES, ensuring a balance between facial aesthetics and smile aesthetics.
Temporal dynamics of soil microbial C and N cycles with GHG fluxes in the transition from tropical peatland forest to oil palm plantation | 7da3f27a-01dd-488e-96cf-0d939b054e9f | 11784229 | Microbiology[mh] | Peatlands, which cover approximately 3% of the Earth's land mass—around 423 million hectares—extend from the tropics to the Arctic. Peatlands store a third of the world's soil carbon and a tenth of its soil nitrogen despite covering a relatively small terrestrial area. Tropical peatlands, estimated at 44–170 million hectares, are critical carbon and nitrogen reservoirs, containing approximately 20% of global soil carbon and 6% of global soil nitrogen. These peatlands are primarily located in coastal areas or inland basins of Southeast Asia, Central Africa, and South America. Conserving tropical peatlands can contribute to climate change mitigation by safeguarding their natural carbon storage capabilities. However, socioeconomic pressures have led to significant changes in land use in these ecosystems, including logging, pulpwood planting, agriculture, and construction. Such disturbances have led to significant emissions of greenhouse gases (GHGs) such as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O), which vary depending on the extent and type of disturbance. The conversion of tropical peatlands to agricultural land has profound environmental consequences. Factors such as temperature, groundwater level, peat humification, and soil nutrient levels could influence GHG fluxes, but their impact may vary, particularly with agricultural development that alters groundwater levels to meet crop needs. Changes in aboveground vegetation can affect soil microclimate, peat formation, and peat physicochemical properties. Drainage, commonly used to lower groundwater levels in tropical peatlands, has been linked to increased soil CO2 levels and shifts in microbial communities.
Given the large carbon stock of tropical peatlands, GHG emissions driven by microbial communities in response to land use changes are concerning. Identifying and understanding variations in GHG emissions associated with different land uses is crucial, especially considering the high global warming potential of CH4 and N2O. Soil microorganisms, particularly prokaryotes, drive the production and consumption of GHGs through the transformation of carbon and nitrogen compounds. Pressure from land use changes can shift microbial communities, affecting peat formation, carbon turnover, and the nutrient mineralization that supports ecosystem sustainability. Therefore, a comprehensive understanding of microbial communities and their functional properties is key to inferring ecosystem responses to land use change, including microbial contributions to GHG emissions. Methane, a potent GHG, is produced anaerobically by methanogens through methanogenesis from CO2, methanol, methylamines, methyl sulfides, and acetate. Conversely, CH4 is converted back to CO2 by reverse methanogenesis and through CH4 oxidation by aerobic methanotrophic bacteria and anaerobic methanotrophic archaea or bacteria. Methanogens and methanotrophs also play roles in the nitrogen cycle. Methanogenesis can be coupled with nitrogen fixation, facilitating the input of nitrogen compounds into anoxic soils. Anaerobic methanotrophic bacteria, "Candidatus Methylomirabilis," oxidize CH4 in combination with nitrite reduction, while the methanotrophic archaeon Methanoperedens nitroreducens couples anaerobic CH4 oxidation with nitrate reduction. Land use changes that affect soil hydrology, such as the drainage of waterlogged soil, can inhibit methanogenesis and complete denitrification, which thrive in the anaerobic, low-redox conditions of deep peat layers. Nitrification and incomplete denitrification can be enhanced in partially aerated soils, leading to higher N2O production.
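The differing global warming potentials noted above are commonly handled by converting gas fluxes to CO2 equivalents. The sketch below uses 100-year GWP values approximating IPCC AR6 (CH4 ≈ 27, N2O ≈ 273 on a mass basis); these factors and the flux numbers are illustrative assumptions, not values from this study:

```python
# Approximate 100-year global warming potentials (mass basis), after IPCC AR6;
# treated here as illustrative assumptions rather than study data.
GWP100 = {"CO2": 1.0, "CH4": 27.0, "N2O": 273.0}

def co2_equivalent(fluxes):
    """Convert gas fluxes (same mass units, e.g. g m-2 yr-1) to a CO2-equivalent total."""
    return sum(GWP100[gas] * flux for gas, flux in fluxes.items())

# Hypothetical annual fluxes: even small CH4/N2O fluxes weigh heavily in CO2e terms.
total = co2_equivalent({"CO2": 1000.0, "CH4": 5.0, "N2O": 0.5})
print(total)  # 1000 + 135 + 136.5 = 1271.5
```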
Furthermore, the close link between microbial carbon and nitrogen cycles and GHG fluxes emphasizes the importance of studying these processes together, particularly in relation to plant interactions. Soil harbors a variety of microorganisms involved in the nitrogen cycle. Another potent GHG, N2O, is produced through nitrification, denitrification, and dissimilatory nitrate reduction to ammonium (DNRA), while N2O consumption is mediated by microorganisms that possess N2O reductase enzymes. Anthropogenic activities, especially increased nitrogen inputs from fertilizer runoff, can exacerbate N2O emissions in plantations. In addition, peat characteristics and vegetation influence microorganisms and biogeochemical processes. Some studies have investigated microbial metabolic pathways in tropical peatland ecosystems in Southeast Asia. These microbial communities were primarily analyzed by amplicon sequencing. Nonetheless, our understanding of the temporal changes in the microbiome and the associated genes encoding enzymes that regulate CH4 and N2O fluxes in tropical peatlands is limited. In particular, there are no studies on the temporal composition of the microbiome in tropical peatlands under different land uses in Sarawak, on Borneo, the world's third-largest island, in Southeast Asia. Our study presents data collected from 2016 to 2020, documenting temporal changes in land use at the same location in the tropics. We tracked the transition from secondary peat swamp forest through land preparation to the early stages of oil palm plantation. This approach mitigates the typical spatial variability observed across sites in microbiome studies. The study site in Sri Aman, Sarawak, was originally a forest dominated by Shorea albida trees, which were selectively logged until commercial logging licenses were terminated in the 1980s. This cessation allowed the area to develop naturally into a secondary peat swamp forest characterized by Litsea spp.
trees. Land preparation began in April 2017 with the construction of drains to reduce the groundwater table and subsequent land clearing. By April 2018, the oil palm plantation was established with the planting of 1-year-old oil palm seedlings. Continuous sampling of the same site provided unique data sets on the soil microbiome and GHG measurements across different land use stages in tropical peatland. The aims of this study are as follows: (i) to discover temporal changes in the microbial community during land use change, (ii) to uncover changes in the microbial carbon and nitrogen cycles governing CH4 and N2O emissions due to land use change, and (iii) to reveal the greenhouse gas potential based on the microbiome and emissions during the transition of land use change in tropical peatland. We hypothesized that differences in GHG emissions are related to microbiome composition and functional gene abundances, and that soil properties play a role in regulating genes associated with CH4 and N2O production and consumption. Groundwater table, environmental variables, and peat chemical properties Groundwater levels in the secondary peat swamp forest fluctuated between –11.1 and 8.5 cm relative to the peat surface. A negative value signifies that the water level was below the peat surface. After the construction of artificial canals and drainage systems during land preparation, the groundwater level dropped to –104 cm. In the oil palm plantation, the water level was maintained within –50 cm. These changes in the groundwater table due to land use change and seasonal variation were significantly different (P < 0.05). Soil moisture content was highest in the forest and averaged above 50% in the oil palm plantation. Soil and air temperatures were lower in the forest and increased during land preparation and in the oil palm plantation. Relative humidity was above 80% in the forest and decreased as the land use changed to an oil palm plantation.
These changes in environmental variables were significant across land uses ( P < 0.05). The conversion of forest to plantation influenced the humification level, total carbon, and concentrations of ammonium, nitrate, and phosphate, despite minimal change in pH values. The pyrophosphate solubility index (PSI), which measures the degree of humification of the peat, was lowest in the forest, followed by land preparation and the oil palm plantation; a higher PSI value indicates a higher humification level. Total carbon was slightly higher in the oil palm plantation than in the forest. The inorganic nitrogen pool in the forest ecosystem was primarily composed of ammonium. This trend reversed, with nitrate levels peaking as the site transitioned to land preparation and then decreasing in the oil palm plantation. In addition, soil phosphate concentrations were lower in the oil palm plantation than during land preparation and in the forest. No seasonal variation was found in the chemical properties based on the current samples ( P > 0.05). In addition, the peat chemical properties differed with sampling depth. The PSI, total carbon, and C:N ratios were higher in the deeper peat layer (25–50 cm below the surface). In contrast, total nitrogen and phosphate were higher in the top peat layer (0–25 cm below the surface).

Soil greenhouse gases

Soil CO 2 , CH 4 , and N 2 O fluxes varied across the different land uses. The CO 2 emissions increased significantly with land use ( P = 0.008) and seasonal variation ( P = 0.020), with the highest value recorded during the wet season in the oil palm plantation. Although not statistically significant, the highest CH 4 emissions were observed in the wet season when the secondary peat swamp forest was waterlogged. The site remained a net CH 4 source during the land preparation and oil palm plantation phases.
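Soil GHG fluxes of this kind are commonly derived from closed-chamber headspace measurements: the flux is the slope of a linear fit of gas concentration against time, converted to a mass flux per unit area via the ideal gas law. The chamber height, temperature, and concentration series below are hypothetical placeholders rather than values from this study, and the authors' actual protocol may differ; this is only a sketch of the arithmetic.

```python
import numpy as np

def chamber_flux(times_min, conc_ppm, chamber_height_m, molar_mass_g,
                 temp_k=300.0, pressure_pa=101325.0):
    """Flux (mg gas m^-2 h^-1) from a closed-chamber concentration series.

    Slope of ppm vs. minutes -> mol m^-2 h^-1 via the ideal gas law
    (n/V = P/RT), scaled by chamber height (headspace volume per unit area).
    """
    R = 8.314  # J mol^-1 K^-1
    slope_ppm_per_min = np.polyfit(times_min, conc_ppm, 1)[0]
    mol_per_m3_per_ppm = pressure_pa / (R * temp_k) * 1e-6
    mol_m2_h = slope_ppm_per_min * 60 * mol_per_m3_per_ppm * chamber_height_m
    return mol_m2_h * molar_mass_g * 1000.0  # g -> mg

# Hypothetical CH4 series rising ~0.05 ppm/min in a 0.3 m tall chamber
t = np.array([0.0, 10.0, 20.0, 30.0])
c = np.array([1.90, 2.40, 2.90, 3.40])
flux = chamber_flux(t, c, chamber_height_m=0.3, molar_mass_g=16.04)
```

A positive slope yields a positive flux (net source), a negative slope a net sink, which is how the forest could register as a CH 4 source but an N 2 O sink.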
N 2 O emissions in the oil palm plantation were significantly higher than in the secondary forest and during land preparation. The N 2 O emissions also showed seasonal variation, with the highest values occurring during the wet season in the oil palm plantation. Initially, the secondary peat swamp forest acted as a net N 2 O sink. However, when land clearing activities began, the site became a net N 2 O source, particularly once it was converted into an oil palm plantation.

Prokaryotic community structure and diversity in tropical peatland

The coverage and characteristics of the shotgun metagenomic sequencing data are detailed in . The tropical peatland microbiota consisted mainly of Bacteria (69%–79%), followed by Eukaryota (20%–29%), mainly Arthropoda , Chordata , and Streptophyta . Fungal DNA accounted for 8%–9% of the eukaryotic DNA, dominated by Ascomycota and Basidiomycota . Archaea constituted less than 3% of the total reads, while viruses, represented by Uroviricota , Taleaviricota , and Artverviricota , contributed 0.4%. Coverage estimates and Nonpareil sequence diversity revealed that microbiome diversity was lower in the oil palm plantation than in the forest. However, the microbiota showed similarities across the forest, land preparation, and plantation, with Proteobacteria , Actinobacteria , and Acidobacteria dominating. Land use change significantly influenced the relative abundance of Verrucomicrobia ( P = 0.012), while seasonal variation shaped Planctomyces populations ( P = 0.014). In addition, peat depth played a role in the relative abundance of Proteobacteria , Actinobacteria , Firmicutes , Bacteroidetes , Cyanobacteria , and Chloroflexi . The prokaryotic communities were dominated by relatively abundant families from the phyla Proteobacteria , Actinobacteria , and Acidobacteria .
In particular, the transition from forest to plantation led to a decrease in the relative abundance of the families Bradyrhizobiaceae , Mycobacteriaceae , and Streptomycetaceae . The family Acidobacteriaceae , which dominates within Acidobacteria , remained predominant and was unaffected by peat depth. The archaeal community was dominated by members of the phyla Euryarchaeota , “ Candidatus Thermoplasmatota,” and Thaumarchaeota , with significant differences in relative abundance due to land use change. Several methanogenic taxa were identified, including Methanobacteriales , Methanocellales , Methanococcales , Methanoliparales , Methanomassiliicoccales , Methanomicrobiales , Methanonatronarchaeales , and Methanosarcinales . Furthermore, the Thaumarchaeota also include ammonia-oxidizing archaea (AOA) belonging to the class Nitrososphaeria .

Prokaryotic community variation in response to abiotic factors

The prokaryotic communities differed in response to land use change. Microbial community composition was more similar between forest and land preparation samples, with the greatest differences observed between the forest and plantation phases. The ordination plot showed that prokaryotic communities that had transitioned more recently retained similarities to the initial land use. The low groundwater levels in the plantation potentially influenced humification levels, as shown by the higher PSI values. Changes in microbial composition were also associated with variations in the C:N ratio, total nitrogen, and N 2 O emissions. Mantel analyses showed that humification level had the strongest correlation with prokaryotic diversity. Other factors, including concentrations of ammonium and phosphate, groundwater level, pH, and C:N ratio, also correlated significantly with prokaryotic diversity.

CAZymes analysis

The assembly statistics and the functional classification of Clusters of Orthologous Groups (COGs) are provided in , respectively.
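The Mantel analyses referenced in the previous subsection test whether two sets of pairwise distances (e.g., community dissimilarity versus environmental difference) are correlated, with significance assessed by permuting one matrix's rows and columns in tandem. A minimal sketch on synthetic toy matrices follows; it is not the study's data or exact implementation, only an illustration of the procedure.

```python
import numpy as np

def mantel(dist_a, dist_b, permutations=999, seed=0):
    """Pearson correlation between two square distance matrices, with a
    one-sided permutation p-value from shuffling rows/columns together."""
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)              # upper triangle, no diagonal
    r_obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(permutations):
        p = rng.permutation(n)
        r_perm = np.corrcoef(dist_a[iu], dist_b[p][:, p][iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

# Toy data: distances among 6 "samples" along one environmental gradient,
# with community distances tracking the gradient plus a little noise
rng = np.random.default_rng(1)
x = rng.normal(size=6)
env = np.abs(x[:, None] - x[None, :])         # symmetric, zero diagonal
y = x + rng.normal(scale=0.1, size=6)
comm = np.abs(y[:, None] - y[None, :])
r, p = mantel(env, comm)
```

A high r with a small p indicates that samples that are environmentally far apart also host dissimilar communities, which is how drivers such as humification level were ranked here.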
The analysis of CAZymes highlighted differences in carbohydrate processing, particularly in the decomposition of plant litter, which involves the breakdown of lignin, cellulose, and hemicellulose by different enzymes. In , the CAZymes heatmap is categorized by selected glycoside hydrolase (GH) and auxiliary activity (AA) families. Amylolytic enzymes from the GH families 13, 15, 57, and 97, which act on glucosidic bonds in starch or short oligosaccharides, were identified. Furthermore, cellulolytic enzymes from GH6 and GH148, as well as hemicellulolytic enzymes from GH43, GH67, and GH113, were detected. These enzymes were present in the expected range in the upper layer of forest and land preparation peat samples; however, they were particularly scarce in the lower layer. Specific GH groups that include a combination of cellulolytic and hemicellulolytic enzymes, such as GH1, GH2, GH3, GH5, and others, were distinguished on the heatmap. The analysis also revealed the presence of AA families targeting lignin. Hemicellulolytic and ligninolytic enzymes acting on recalcitrant peat components were widespread in disturbed peat (land preparation and oil palm plantation). However, the plantation samples clustered separately from the forest and land preparation samples. Notably, sequences associated with GH1 and GH5 were also prevalent in the plantation samples.

Microbial methane-cycling functional genes

Microbial methane-cycling functional genes were analyzed primarily through gene-based approaches, as detailed in the Materials and Methods, with metagenome-assembled genomes (MAGs) employed selectively to complement these analyses. The heatmap and cluster analysis of GHG production and consumption genes revealed distinct patterns across land uses. Plantation samples clustered separately, whereas forest and land preparation samples exhibited comparable functional potential profiles.
The mcrA gene abundance was higher in the waterlogged forest, with the major methanogens identified as Methanocellales and Methanosarcinales . We also recovered MAGs encoding mcrA , assigned to the phylum Halobacteriota (GTDB classification) and linked to Methanocellales and Methanosarcinales , reinforcing the gene-centric analyses. In addition, genes related to CH 4 production ( mcrABC ) showed significant correlations with PSI, C:N ratio, and CO 2 fluxes. Non-methanogenic CH 4 production through phosphonate ( phnJ ) demethylation was also observed across land uses. The α-subunit of the copper-containing membrane-bound particulate CH 4 monooxygenase, encoded by the pmoA gene, was the predominant methanotrophic trait observed in our samples. Methanotrophic Alphaproteobacteria , particularly genera such as Methylocystis , Methylosinus , and Bradyrhizobium , were identified in all land uses and are potential CH 4 regulators in tropical peatlands. Importantly, the binning of Methylocystis MAGs provided further evidence of CH 4 oxidation capabilities, underscoring the findings from our gene-centric analyses. In addition, pmoA genes were found in Gammaproteobacteria (mainly Methylococcales ) and Verrucomicrobia . Genes involved in CH 4 oxidation ( pmoABC ) correlated with groundwater level, CH 4 fluxes, total nitrogen, C:N ratio, ammonium, and phosphate levels. The α-subunit of the iron-containing cytoplasmic soluble CH 4 monooxygenase, encoded by the mmoX gene, was found in Alphaproteobacteria , Gammaproteobacteria , and Actinobacteria . The mmoX genes showed weak correlations with total nitrogen, C:N ratio, and concentrations of ammonium and phosphate. This study also highlighted the putative role of anaerobic oxidation of CH 4 by archaea and bacteria, as these taxa were present at 50 cm peat depth.
Potential anaerobic CH 4 oxidizers included anaerobic methanotrophic archaea (ANME) members (“ Candidatus Methanohalobium,” Methanoperedens ) and NC10 methanotrophic bacteria (“ Candidatus Methylomirabilis”). Only Methylomirabilota MAGs were recovered from the data sets, while MAGs for ANME members, such as “ Candidatus Methanohalobium” and Methanoperedens , were not detected.

Microbial nitrogen-cycling functional genes

The gene-centric analyses identified sequences that may play key roles in nitrification, primarily driven by AOA and ammonia-oxidizing bacteria (AOB), which are integral to the nitrogen cycle. The dominant AOA were Thaumarchaeota (class Nitrososphaeria ), detected across all land uses, while most AOB were Proteobacteria . The amoA gene abundance exceeded that of the nitrite reductase ( nir ) genes in the later stage of land preparation, suggesting conditions that favor ammonia-oxidizing microorganisms. Ammonia oxidation genes correlated with groundwater levels and CH 4 fluxes. Nitrification also likely contributes to N 2 O production in the oil palm plantation, as evidenced by a threefold increase in the amoA to nosZ ratio. Furthermore, Nitrosotalea MAGs encoding amoABC were recovered, suggesting a possible role in nitrification and significance in the nitrogen cycle. In the oil palm plantation, nirK gene counts exceeded nirS counts by 3- to 13-fold. Across land uses, the relative abundance of nirK genes was higher than that of nirS , with nirS exceeding nirK only in the top-layer January 2016 forest sample, indicating that nirK -type denitrification dominates in tropical peatlands. The major genera of nirK -type denitrifiers were Bradyrhizobium , Pseudomonas , Mesorhizobium , Rhizobium , Rhodopseudomonas , Bosea , Paracoccus , Achromobacter , Sinorhizobium , and Ensifer . Mantel analyses of nir genes showed correlations with groundwater levels and with CO 2 and N 2 O fluxes.
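Ratios such as amoA : nosZ , nirK : nirS , and (nirK + nirS) : nosZ are simple quotients of normalized gene abundances. The sketch below uses hypothetical RPKM-style values, not the study's counts, purely to illustrate the bookkeeping behind such fold-differences.

```python
# Hypothetical normalized gene abundances (e.g., RPKM) for one sample;
# the numbers are illustrative only, not values from this study.
abundances = {"nirK": 130.0, "nirS": 13.0, "norB": 90.0, "nosZ": 30.0}

def ratio(numerators, denominators, table):
    """Sum of numerator gene abundances over sum of denominator abundances."""
    num = sum(table[g] for g in numerators)
    den = sum(table[g] for g in denominators)
    return num / den

nirk_to_nirs = ratio(["nirK"], ["nirS"], abundances)          # 10.0
nir_to_nosz = ratio(["nirK", "nirS"], ["nosZ"], abundances)   # ~4.77
norb_to_nosz = ratio(["norB"], ["nosZ"], abundances)          # 3.0
```

A nitrite-reducer to N 2 O-reducer ratio well above one is read as a genetic potential skewed toward N 2 O production over its reduction, which is the logic applied to the plantation samples here.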
The norB gene, a direct source of N 2 O, showed higher abundance in deeper peat layers. The norB -encoding prokaryotes belong to members of Proteobacteria ( Burkholderia and Ralstonia ), Acidobacteria ( Terriglobus and Paludibaculum ), Actinobacteria ( Nonomuraea ), and Planctomycetes ( Gemmataceae ). In addition, Bacteroidetes , Chlamydiae , Chloroflexi , Cyanobacteria , Gemmatimonadetes , Nitrospirae , Spirochaetes , and Verrucomicrobia also encoded norB , indicating a broad microbial contribution to N 2 O production across land uses. Most recovered MAGs encoding norB genes were affiliated with Pseudomonadota and Acidobacteriota . Correlation analyses indicated a weak correlation of norB genes with total nitrogen and the C:N ratio. The nosZ gene mediates the final step of denitrification, the only known biological process that converts N 2 O to N 2 . Taxa encoding complete denitrification genes, such as Magnetospirillum , Ralstonia , Burkholderia , Paraburkholderia , Dyella , and Terriglobia , were more abundant during land preparation and in the oil palm plantation. Predominant taxa encoding the nosZ gene include members of the phyla Proteobacteria and Acidobacteria . In addition, the relative abundance of Bradyrhizobium and Methylocystis decreased as the forest transitioned to an oil palm plantation. The ratio of nitrite reducers (sum of nirK and nirS ) to N 2 O reducers ( nosZ ) was highest in the January 2020 oil palm plantation sample, ranging from 7- to 11-fold. The abundance of norB genes was three times higher than that of nosZ . The main N 2 O producers are possibly the nirK -type denitrifiers, whose gene abundance exceeded that of nirS -type denitrifiers. This coincides with the increase in N 2 O fluxes observed in January 2020. The fermentative DNRA genes ( nirB ) were relatively more abundant than the respiratory DNRA genes ( nrfA ).
Members of Proteobacteria (orders Burkholderiales , Caulobacterales , Hyphomicrobiales , Lactobacillales , Methylococcales , Nevskiales , Pseudomonadales , Rhodospirillales , and Xanthomonadales ) encoded nirB -mediated DNRA. The nrfA -mediated DNRA was dominated by the class Terriglobia ( Acidobacteria ), with lower occurrence in the plantation samples. The DNRA pathway produces N 2 O as a by-product, and the lower abundance of DNRA genes in the oil palm plantation suggests a minor contribution to N 2 O emissions. Metagenomic analysis also revealed a widespread presence of nitrogen fixation ( nif ) genes in archaea and bacteria. Alphaproteobacteria were the dominant nitrogen fixers in tropical peatlands, with minor contributions by Beta- , Delta- , and Gammaproteobacteria . The nif genes were also detected in Acidobacteria , Actinobacteria , Bacteroidetes , Chlorobi , Chloroflexi , Nitrospirae , Planctomycetes , and Verrucomicrobia , indicating broad community participation in nitrogen fixation. The major taxonomic families of diazotrophs are shown in , with Bradyrhizobiaceae decreasing when the site transitioned to an oil palm plantation. Overall, the forest samples from January and August 2016 had a higher abundance of nifH genes than the plantation samples (January and August 2020). In addition, nifH genes were detected in methanogens (i.e., Methanomicrobium , Methanothrix , Methanocella , Methanoregula , Methanosarcina , Methanolinea , and Methanolobus ) and ANME members (“ Candidatus Methanoperedens”), suggesting a possible coupling of nitrogen fixation to CH 4 metabolism.
Land development in tropical peatlands alters carbon and nitrogen cycles due to changes in vegetation, litter accumulation, and decomposition rate.
In oil palm plantations, groundwater levels are deliberately lowered to approximately 50 cm, as oil palm feeder roots are most active at this depth for optimal crop growth and root development . Groundwater levels in oil palm plantations on tropical peatlands are managed through drains, canals, and water-blocking structures (weirs) for water retention and drainage . Adjustments to groundwater levels are made depending on the stage of oil palm development. In this study, the secondary peat swamp forest transitioned from being a net N 2 O sink and a source of CO 2 and CH 4 to a net GHG source with increased CO 2 and N 2 O emissions during land preparation and in the oil palm plantation . Higher soil temperatures likely stimulate microbial activities that increase GHG emissions . As land use changes, the removal and alteration of aboveground vegetation affect humidity and temperature, while lowering the groundwater table increases the oxic layer, affecting peat decomposition and GHG fluxes . Increased CO 2 emissions have been attributed to heightened oxidative peat decomposition . High groundwater levels and moisture content could saturate peat layers and restrict aeration, increasing CH 4 emissions . Nitrate levels possibly increased as a result of the mineralization of dead plant material after land clearing . In addition, surplus nitrogen, primarily from nitrogen-based fertilizers, exceeds plant requirements and is transformed in the soil, increasing N 2 O fluxes in the oil palm plantation. Long-term over-fertilization in the plantations can lead to soil acidification, reduce microbial diversity, and increase GHG emissions . Building on these known effects, our study investigates how land use changes affect prokaryotic communities and the functional potential of specific microbial groups. Additionally, this report addresses the underrepresentation of tropical peatland metagenomic studies compared to other climate zones. 
Land use change influences prokaryote composition

Microbial communities in recently transitioned soils often retain taxa similarities to those of their previous land uses. Over time, these communities gradually develop unique traits specific to the new land use. Our results showed that the microbial composition in the secondary peat swamp forest was more similar to that of the land preparation phase than to that of the oil palm plantation. This suggests that while microbial community changes begin during the land preparation phase, more pronounced shifts occur with the introduction of oil palm seedlings and fertilization. The rhizosphere of young oil palms and the influx of nutrients from fertilization probably lead to significant shifts in microbial population structure and functions. The predominant bacterial phyla in all three land use types (Proteobacteria, Actinobacteria, and Acidobacteria), as well as the archaeal phyla Euryarchaeota and "Candidatus Thermoplasmatota", are consistent with reports from other studies on tropical peatland microbiomes. Proteobacteria, the most abundant taxa, play a key role in carbon and nitrogen cycling, while Actinobacteria are essential plant decomposers. Acidobacteria can survive across oxygen gradients, utilize different carbohydrates and nitrogen sources, and are well suited to nutrient-limited peatlands. In boreal and temperate peatlands, Acidobacteria are more dominant than Proteobacteria. However, the ecological traits of Acidobacteria remain largely undescribed, and individual clades can adapt to different habitats.

Microbial CH₄ production and oxidation in tropical peatland

This study identified core taxa and the putative primary biogeochemical processes governing CH₄ and N₂O in tropical peatlands, which include Proteobacteria, Acidobacteria, Euryarchaeota, and Thaumarchaeota. Waterlogged and anaerobic conditions in the secondary peat swamp forest likely contributed to higher CH₄ emissions.
Methanogenesis involves diverse microbial groups, with substrates supplied through fermentation and acetogenesis. Methanomicrobia, specifically the orders Methanocellales and Methanosarcinales, were the main methanogens. However, CH₄ fluxes in tropical peatlands can vary throughout the year and are influenced by vegetation, groundwater levels, and nutrient levels. Although CH₄ and N₂O concentrations are lower than those of CO₂, both are more potent GHGs per molecule, which is a concern in disturbed soils. Most CH₄ is produced through two main pathways, acetoclastic and hydrogenotrophic methanogenesis, depending on substrate availability. The dominance of Methanocellales in the secondary peat swamp forest suggests that hydrogenotrophic methanogenesis is the primary source of CH₄ in this environment. Hydrogenotrophic methanogenesis is favored because it yields more energy than acetoclastic methanogenesis under nutrient-limited conditions. Moreover, methanogens can couple this process with nitrogen fixation, providing an alternative nitrogen source under anoxic conditions. However, methanogens can also be outcompeted for substrates (i.e., acetate, hydrogen, and CO₂) by sulfur-reducing bacteria, which could suppress CH₄ emissions. Although methanogens dominate CH₄ production, facultative anaerobic wood-rot fungi have also been reported to emit CH₄ through a halomethane-dependent pathway. In our study, Proteobacteria and Verrucomicrobia dominated aerobic CH₄ oxidation. Methanotrophic Alphaproteobacteria, especially the family Methylocystaceae, are found across peatlands in the tropics (South America and Southeast Asia) and boreal regions (North America). These ubiquitous Methylocystaceae are resilient to changes in aboveground vegetation. As for anaerobic CH₄ oxidation, Methanoperedens nitroreducens can convert CH₄ to CO₂ through reverse methanogenesis.
This archaeon then supplies nitrite to methanotrophic bacteria such as "Candidatus Methylomirabilis" to facilitate anaerobic CH₄ oxidation.

Denitrification as primary N₂O source in disturbed tropical peatland

The 2-year-old oil palm plantation in our study had higher CO₂ and N₂O emissions than the previous land uses. The GHG fluxes aligned with plantation practices that lower groundwater levels and apply nitrogen-based fertilizers (e.g., NPK [nitrogen, phosphorus, and potassium] compound fertilizer, urea, ammonium chloride, ammonium nitrate, and ammonium sulfate) to promote the root development, growth, and yield potential of young oil palms. Higher nitrogen-based fertilization in mature oil palm plantations could lead to higher N₂O emissions and peat decomposition. However, overall CO₂ and N₂O emissions from well-managed oil palm plantations can decrease over time through appropriate nutrient and water management strategies as the oil palms mature. Nitrification initiated by ammonia-oxidizing archaea (AOA) and bacteria (AOB) indirectly produces N₂O. Consistent with our results, Nitrososphaera, which encodes the amoA genes, is the dominant AOA in low-nutrient acidic tropical peatlands. AOA have a higher substrate affinity, which gives them a competitive advantage in environments with low ammonia concentrations. However, under increased nitrogen fertilization and liming, N₂O emissions are likely a by-product of AOB nitrification, as AOB emit higher levels of N₂O than AOA.
Therefore, managed peatlands that use slow-release ammonia fertilizers may allow AOA to dominate nitrification, potentially resulting in lower N₂O emissions. Like nitrification, DNRA (nitrate ammonification) is a minor contributor to N₂O production based on gene relative abundance. The lower DNRA gene abundance indicates that the ecosystem does not retain nitrogen but instead favors removing excess nitrogen through denitrification. In our study, denitrification appears to be the primary source of N₂O, as evidenced by the gene ratios and the diverse nirK-type denitrifier community identified. The nirK gene encodes a copper-containing nitrite reductase, and elevated copper levels induced by copper fertilizer might favor nirK-type denitrifiers. Higher abundance and diversity of nirK-type than nirS-type denitrifiers have been observed in other natural and drained peatlands. Nitric oxide reductases (norB), part of the denitrification pathway, are the direct source of N₂O and are encoded by diverse bacterial communities. Although we could not detect the norB gene in archaea, complete denitrification pathways have been reported for Haloarculaceae and Haloferacaceae. The co-occurrence of denitrification genes in different prokaryotes emphasizes the modularity of this pathway, in which intermediate molecules interact with other nitrogen metabolic pathways. Incomplete denitrification without the nosZ gene was the major source of N₂O in this study. Increased N₂O emissions were attributed to N₂O production exceeding nosZ activity. In nitrogen-rich environments, Proteobacteria and Acidobacteria are the predominant nosZ-encoding taxa. Bradyrhizobium, Methylocystis, Ralstonia, Burkholderia, Paraburkholderia, and Terriglobus, which encode nosZ, could reduce N₂O emissions in tropical peatlands. Some non-denitrifiers also contribute to N₂O reduction, encoding only the nosZ genes without the nir or norB genes for energy conservation. The lower gene abundance of nifH in the oil palm plantation suggests potential suppression of diazotrophs. Diazotrophic bacteria such as Bradyrhizobium that encode nosZ have the potential to regulate soil N₂O emissions. Therefore, suppression of Bradyrhizobium in the oil palm plantation could impair N₂O reduction to N₂.
Other diazotrophs, such as the oligotrophic Geobacter and Anaeromyxobacter, may also be suppressed in nutrient-rich environments. These oligotrophic taxa can be outcompeted by fast-growing copiotrophic taxa under long-term fertilization. The forest site in this study, initially a net N₂O sink, transitioned to a net N₂O source as nitrate and humification levels increased in response to land clearing and conversion to an oil palm plantation. The nir genes correlated positively with N₂O fluxes. However, N₂O emissions can vary across natural and managed ecosystems depending on factors such as the groundwater table, soil carbon, and nitrogen availability. Aerobic and anaerobic microsites near the soil surface in managed land with high carbon and nitrogen availability may promote coupled nitrification-denitrification reactions. Denitrification via norB occurs within these microsites with limited or no oxygen, influenced by rainfall and groundwater table fluctuations. Dominant nirK-type denitrifiers are more likely to perform incomplete denitrification; consequently, a higher ratio of nir to nosZ genes leads to higher N₂O emissions. Nitrous oxide emissions also correlate positively with nitrogen fertilization rates. Therefore, the oil palm industry could mitigate N₂O emissions by balancing nitrogen fertilization practices to reduce over-fertilization while still meeting plant nutrient requirements, and by implementing strategies to minimize nitrogen losses through microbial denitrification. The findings of this study highlight that disturbed tropical peatland ecosystems emit substantial GHGs driven by specific microbial groups and conditions. While these observations may be site specific, the study provides valuable insights into the dynamics of microbial composition and GHG emissions over the monitoring period from 2016 to 2020, considering specific ecosystem functions, environmental parameters, and peat chemical properties.
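To make the gene-ratio reasoning above concrete, the sketch below (Python, with entirely hypothetical abundance values, not data from this study) computes a (nirK + nirS):nosZ index, where higher values suggest more incomplete denitrification and therefore greater potential for net N₂O release:

```python
def n2o_emission_index(gene_abundance):
    """Ratio of nitrite reductase genes (nirK + nirS) to nitrous oxide
    reductase genes (nosZ); higher values point toward incomplete
    denitrification and thus net N2O release."""
    nir = gene_abundance.get("nirK", 0.0) + gene_abundance.get("nirS", 0.0)
    nosz = gene_abundance.get("nosZ", 0.0)
    return float("inf") if nosz == 0 else nir / nosz

# Hypothetical length-normalized gene abundances for illustration only:
forest = {"nirK": 1.8, "nirS": 0.4, "nosZ": 1.5}
plantation = {"nirK": 3.1, "nirS": 0.5, "nosZ": 1.2}

print(n2o_emission_index(forest))      # lower index: more complete denitrification
print(n2o_emission_index(plantation))  # higher index: more N2O escapes unreduced
```

Such an index is only a genetic-potential heuristic; transcript or activity measurements would be needed to confirm actual N₂O fluxes.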
The relative abundance profiles of the microbial composition may not fully capture changes in absolute abundance, and the predicted functions may not always align with active functions. Nonetheless, our approach offers a comprehensive assessment of tropical peatland microbial communities, covering a wide range of taxa and functional potential through read- and gene-based analyses. Future studies could benefit from incorporating multi-omics data to elucidate active biogeochemical processes and track changes in microbiome composition and functions in response to land use changes.

Conclusion

This study investigated the impact of land use change on prokaryotic communities, peat chemistry, plant litter decomposition, GHG emissions, and ecosystem functioning. Prokaryotic communities correlated with humification levels, the groundwater table, pH, the C:N ratio, and ammonium and phosphate concentrations. Although CH₄ fluxes from soil were negligible, mcrA genes associated with Methanocellales and Methanosarcinales were present across the different land uses. The CH₄ fluxes correlated with the groundwater table, humification levels, and the C:N ratio. Major CH₄ oxidizers, particularly the Methylocystis group, which also encode nosZ genes, were negatively affected by land use changes, potentially influencing N₂O regulation. The microbial community's functional potential, assessed through gene abundance and the ratios of nir to amoA and nosZ, suggests that N₂O production was primarily driven by denitrification with minor contributions from nitrification. The N₂O fluxes correlated with the groundwater table, total nitrogen, and the C:N ratio. Agricultural practices, such as lowering groundwater levels and fertilization, could stimulate denitrifying microbial communities. Land use changes transformed the forest site from a CO₂ and CH₄ source and N₂O sink to a source of CO₂, CH₄, and N₂O following land preparation and oil palm cultivation.
This study suggests that limiting soil carbon and nitrogen availability may be crucial for regulating microbially mediated GHGs. While these findings shed light on the genetic potential of the tropical peatland microbiome, further research is needed to validate the ecological contributions of less-common taxa and to confirm whether the inferred CH₄ and N₂O metabolic pathways are active. Ideally, long-term monitoring of tropical peatlands is essential to assess ecosystem resilience and inform sustainable management practices.
Study site description

Fieldwork for this study was conducted from 2016 to 2020, covering the transition of the land from a secondary peat swamp forest to cleared land prepared for oil palm planting in a 9 × 9 m triangular pattern. Field sampling trips were conducted in the months indicated in to measure GHG emissions, associated peat chemical properties, environmental variables, microbial communities, and the genetic potential affected by land use change. The mean annual temperature is about 27°C, with annual precipitation ranging from 2,734 to 3,312 mm during this period.
Temperature and rainfall data were retrieved from the Department of Irrigation and Drainage of Malaysia in Sri Aman, Sarawak. Historically, the tropical peat swamp forest remained waterlogged throughout the year except during the dry season (May to September), when groundwater levels temporarily receded below the peat surface. The wet season, marked by heavy rainfall, peaks in January and spans from November to March. Following land clearance, groundwater levels were artificially managed (lowered or raised) using drains, canals, and water-blocking structures (weirs) to support the establishment of oil palm plantations on tropical peatlands. Chemical fertilizers for young oil palm trees were applied by the plantation management in two to three rounds a year, beginning in June 2018 and subsequently in April through May and September through October, avoiding months with high rainfall intensity or dry periods, with the amount adjusted based on palm age and crop requirements. Young palms received the following annual amounts: nitrogen as 1.0–2.0 kg of ammonium sulfate and 0.5–1.5 kg of urea; phosphorus as 2.0–3.0 kg of rock phosphate; potassium as 1.5–2.5 kg of muriate of potash; and micronutrients as 0.1–0.2 kg of copper, zinc, and borate.

Soil respiration sampling

We sampled soil respiration (CO₂, CH₄, and N₂O) using the closed-chamber method. At each sampling time, eight open-ended stainless-steel cylinders (25 cm height; 10 cm radius; n = 8) were randomly installed in situ during the different years as the land transitioned from a secondary peat swamp forest through the land preparation phase to an oil palm plantation. In the secondary peat swamp forest and during land preparation, the chambers were placed randomly within a 50 m radius. In the oil palm plantation, the chambers were installed 1.5 m from the base of the oil palm trunks.
For estimating CO₂ soil respiration, 250 cm³ of surface air was collected at zero minutes, followed by another 250 cm³ air sample from each closed chamber at 4-minute intervals. These samples were transferred into Tedlar gas sampling bags using a 25 mL syringe connected to the lid of the closed chamber through silicone tubes. The linear relationship between CO₂ efflux and time in the closed-chamber method has been independently validated in other tropical peatland studies, with sampling intervals of 4, 10, and 40 minutes demonstrating consistent proportional relationships. For CH₄ and N₂O, 20 cm³ air samples were collected from each chamber at zero minutes and subsequently at 10, 20, and 40 minutes after chamber closure. The 20 cm³ air samples were transferred into pre-vacuumed gas chromatography (GC) vials and transported to the laboratory.

Environmental variables, peat sampling, and groundwater measurement

Relative humidity and air temperature were measured using a TESTO 625 (Testo SE & Co. KGaA, Germany). Soil temperature (10 cm depth) was measured using a Checktemp 1 HI-98509 digital thermometer (Hanna Instruments, USA). Soil moisture was measured using a Soil Moisture Meter DIK-311F (Daiki, Japan). These parameters were measured (in six replicates) in the vicinity of each chamber during soil respiration sampling. Perforated polyvinyl chloride (PVC) pipes were installed in auger holes, and groundwater levels were determined by subtracting the height of the pipes from measurements taken from the water surface to the top of the pipes using a measuring tape (n = 8 per sampling time). The following procedure was used to collect composite peat samples from eight sampling points for chemical analyses and metagenome extraction at each sampling time. A peat auger (Eijkelkamp, The Netherlands) was used to collect peat samples, from which composite peat was extracted.
Large woody materials were removed, and samples were divided by depth: 0–25 cm (top layer, represented by "T" in the sample ID) and 25–50 cm (bottom layer, represented by "B" in the sample ID; ). Each sampling point was augered in triplicate, divided, and then homogenized (in situ) to create composite samples of the top and bottom layers. At each sampling time, two composite samples per depth were collected for chemical analysis. Similarly, for metagenome extraction, peat samples from all eight sampling points (multiple augerings) were homogenized to form one large composite sample per depth. All pooled top- and bottom-layer samples were sealed in zip-locked bags and transported on ice. Upon arrival, samples for microbiome analysis were stored at −80°C until total DNA extraction. Peat samples for chemical analysis were air dried, sieved (2 mm), and stored at 4°C before analysis.

Chemical analyses and greenhouse gas measurements

The pH, PSI, total C, total N, C:N ratio, nitrate (ppm), ammonium (ppm), and phosphate contents of the soil samples were determined using standard procedures. The CO₂ concentration in the 250 cm³ Tedlar bags was measured within 6 hours of collection using an infrared CO₂ gas analyzer (Fuji Electric ZFPGC11, Japan) set up in the field. The vials with 20 cm³ air samples were transported back to the laboratory, and the CH₄ concentration was measured using a gas chromatography system with a flame ionization detector (Agilent 7890A, USA). The N₂O concentration was measured using a gas chromatography system with an electron capture detector (Agilent 7890A, USA). The CO₂, CH₄, and N₂O gas fluxes were calculated from the linear accumulation of gases over time in the closed chambers. Additional information on peat chemical analyses and soil respiration measurements is provided in Supplementary Information S1 and S2.
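The flux calculation from linear gas accumulation can be sketched as follows (a minimal Python example with hypothetical numbers; the chamber height matches the cylinders described above, but the regression-to-flux conversion shown here is a generic ideal-gas formulation, not necessarily the study's exact equation):

```python
def chamber_flux(times_min, conc_ppm, chamber_height_m, air_temp_c,
                 molar_mass_g, pressure_pa=101325.0):
    """Estimate a gas flux (mg m^-2 h^-1) from a closed-chamber time series.

    The slope of concentration vs. time (ppm min^-1) comes from an
    ordinary least-squares fit; it is converted to a molar flux with the
    ideal gas law, using the fact that V/A for a cylinder equals its height.
    """
    n = len(times_min)
    t_mean = sum(times_min) / n
    c_mean = sum(conc_ppm) / n
    slope = (sum((t - t_mean) * (c - c_mean) for t, c in zip(times_min, conc_ppm))
             / sum((t - t_mean) ** 2 for t in times_min))  # ppm min^-1
    r = 8.314  # J mol^-1 K^-1
    mol_per_m3_per_ppm = pressure_pa / (r * (air_temp_c + 273.15)) * 1e-6
    flux_mol = slope * mol_per_m3_per_ppm * chamber_height_m  # mol m^-2 min^-1
    return flux_mol * molar_mass_g * 1000.0 * 60.0  # mg m^-2 h^-1

# Hypothetical CO2 series from a 25 cm chamber at 27 degrees C (M = 44.01 g/mol):
co2_flux = chamber_flux([0, 10, 20, 40], [400, 420, 440, 480], 0.25, 27.0, 44.01)
```

A constant concentration yields a zero slope and hence zero flux, which is a useful sanity check on field data.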
DNA extraction and purification

Environmental DNA was extracted from 1.5 g of peat using the FastDNA Spin Kit for Soil (MP Biomedicals, USA), split across three of the manufacturer's microcentrifuge tubes with 0.5 g of peat each. The samples were lysed using a TissueLyser II (Qiagen, Germany) with bead beating at 30 Hz for 3 minutes, repeated five times with one minute on ice between rounds. Humic substances were then removed using 500 µL of 5.5 M guanidine thiocyanate, and the DNA pellets were washed at least three times until the Binding Matrix beads returned to their original color. Further purification was done using Agencourt AMPure XP beads (Beckman Coulter Life Sciences, USA). Purified DNA was eluted with nuclease-free water. For each composite sample, high-quality purified DNA from the three extraction replicates was pooled to represent the sample. In total, the following numbers of samples were prepared for metagenome sequencing: secondary peat swamp forest (two top-layer and two bottom-layer samples), land preparation (three top-layer and three bottom-layer samples), and oil palm plantation (two top-layer and two bottom-layer samples; n = 14).

Metagenomic sequencing and analyses

The DNA was quantified with a NanoPhotometer P360 (Implen GmbH, Germany) and a Qubit 4 Fluorometer (Invitrogen, Singapore) prior to metagenomic sequencing. Metagenomic library preparation and shotgun sequencing were performed at NovogeneAIT Genomics (Singapore). Briefly, total DNA was randomly sheared into fragments. The fragments were end-repaired, polyadenylated, and ligated with Illumina adapters before PCR amplification. Quantified libraries were sequenced on the NovaSeq 6000 platform (Illumina, CA, USA) with 2 × 150 bp paired-end chemistry to a sequencing depth of 40 Gbp. The raw paired-end reads from the 14 metagenomes were processed with BBTools v38.94. Data set coverage was estimated with Nonpareil v3.304.
Information on the sequencing effort and estimated average coverage is described in . The read-based classifications were performed with Kraken2 v2.1.2 against the non-redundant NCBI nt database. The relative abundances of the microbiome profiles were then re-estimated with Bracken v2.6.2 and converted to biom files using kraken-biom v1.0.1 for further analyses. For gene-based analysis, the clean reads were error-corrected with bbcms.sh (BBTools) and assembled with MEGAHIT v1.2.9. Assembled contigs were assessed with metaQUAST v5.0.2, reads were mapped back with bbmap.sh (BBTools), and protein-coding sequences (CDSs) were predicted using Prodigal v2.6.3. Functional annotation was performed with eggNOG-mapper v2.1.7 (eggNOG database v5.0.2) and DIAMOND in --iteration mode. The annotation best hits were screened for genes acting in CH4 and nitrogen transformation processes, using the core genes listed in , based on gene names and KEGG (Kyoto Encyclopedia of Genes and Genomes) Orthology entries. Carbohydrate-active enzymes (CAZymes) were identified against the CAZy database using dbCAN v3.0.7 with HMMER and DIAMOND. Putative CDSs with at least one positive hit were further annotated against the NCBI non-redundant nr database, the Swiss-Prot curated protein sequence database, and the Protein Data Bank database (pdbaa) using DIAMOND v2.1.8.162 and blastx, and distant homologs identified as false positives were removed. The “.daa” outputs from the DIAMOND alignments to the nr database were used to assign taxonomic classification with the MEGANIZER program, based on the naïve lowest common ancestor algorithm in MEGAN6. In addition, assembled contigs were binned with CONCOCT v1.0.0, metaBAT2 v2.12.1, and MaxBin2 v2.2.6 with default parameters, and the bins were consolidated within metaWRAP v1.3.2 to recover metagenome-assembled genomes (MAGs).
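The consolidated bins are subsequently screened by completeness and contamination before being accepted as draft MAGs (the study's thresholds: more than 50% completeness and less than 10% contamination, following MIMAG). A minimal sketch of that filtering rule, with hypothetical CheckM2-style bin statistics:

```python
# Illustrative sketch (hypothetical bin statistics, not the study's bins):
# keep draft MAGs with >50% completeness and <10% contamination (per MIMAG).

def passes_quality(completeness_pct, contamination_pct,
                   min_completeness=50.0, max_contamination=10.0):
    """Return True if a bin qualifies as a draft MAG."""
    return completeness_pct > min_completeness and contamination_pct < max_contamination

# (bin_id, completeness %, contamination %)
bins = [
    ("bin.001", 92.4, 3.1),
    ("bin.002", 48.7, 1.0),   # fails completeness
    ("bin.003", 76.2, 14.5),  # fails contamination
]
draft_mags = [name for name, comp, cont in bins if passes_quality(comp, cont)]
```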
Bin quality was determined with CheckM2 v1.0.2, and draft bins with more than 50% completeness and less than 10% contamination (following MIMAG) were checked with MAGPurify v2.1.2 to remove incorrectly binned contigs. The draft MAGs were dereplicated with dRep v3.0.0, and MAGs passing the quality threshold were taxonomically classified with the Genome Taxonomy Database toolkit (GTDB-Tk v2.1.1, R207 v2) and functionally annotated with Distilled and Refined Annotation of Metabolism (DRAM v1.4.6). The detailed parameters used in the metagenomic analyses are available in Supplementary Information S3.

Data analyses

Statistical analyses and visualizations were conducted in R v4.3.1 with RStudio v2023.03.1, using the R packages stats v4.3.1, phyloseq v1.44.0, and vegan v2.6-4. Alpha diversity was estimated with the Nonpareil sequence diversity (Nd), which is based on rarefied coverage, combines richness and evenness to represent total diversity, and correlates with classic diversity indexes. Beta diversity was analyzed with non-metric multidimensional scaling based on Bray-Curtis distances computed with the metaMDS function in the vegan package, and a bi-plot was constructed with the envfit function to relate peat chemical properties and GHG measurements to the prokaryotic communities. The relative abundance of selected functional genes was quantified by mapping metagenomic reads to all predicted sequences and normalizing the counts by gene length to represent gene abundance within the microbial communities. Heatmaps for selected soil respiration genes, CAZymes, and COGs were constructed from Z-score-transformed data to improve normality and homogeneity of variances. Mantel tests with Bray-Curtis distances and Spearman's rank correlation were used to determine the correlation of environmental variables and soil greenhouse gases with microbial community composition and with genes related to the production and consumption of CH4 and N2O.
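The gene-length normalization mentioned above can be sketched as follows. The text states only that read counts were normalized by gene length; the RPKM-style scaling below (per kilobase of gene, per million mapped reads) is one common choice and is an assumption, not necessarily the study's exact formula, and the counts and gene lengths are hypothetical.

```python
# Illustrative sketch of length-normalized functional gene abundance
# (RPKM-style scaling; an assumption, not necessarily the study's formula).

def length_normalized_abundance(read_counts, gene_lengths_bp):
    """read_counts: {gene: mapped reads}; gene_lengths_bp: {gene: length in bp}."""
    total_reads = sum(read_counts.values())
    return {
        gene: n / ((gene_lengths_bp[gene] / 1_000) * (total_reads / 1_000_000))
        for gene, n in read_counts.items()
    }

counts = {"mcrA": 500, "pmoA": 200}     # hypothetical mapped-read counts
lengths = {"mcrA": 1_650, "pmoA": 750}  # hypothetical gene lengths in bp
abundance = length_normalized_abundance(counts, lengths)
```

Dividing by gene length prevents longer genes from appearing more abundant simply because they recruit more reads, which matters when comparing genes as different in length as those screened here.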
The ratio of mcrA to pmoA indicated the balance between methanogenesis and CH4 oxidation. Nitrification was compared with denitrification through the ratio of amoA to the sum of nirK and nirS. The ratios of the sum of nirK and nirS to nosZ and of norB to nosZ were used as indicators of the potential for denitrification-driven gaseous nitrogen loss. Data visualization was performed with the ggplot2 v3.4.2, ComplexHeatmap v2.16.0, pheatmap v1.0.12, cowplot v1.1.1, and patchwork v1.1.2 R packages. Additional information on the data analyses can be found in Supplementary Information S4.

Fieldwork for this study was conducted from 2016 to 2020, covering the transition of the land from a secondary peat swamp forest to cleared land prepared for oil palm planting in a 9 × 9 m triangular pattern. Field sampling trips were conducted in the indicated months to measure GHG emissions, associated peat chemical properties, environmental variables, and the microbial communities and genetic potential affected by land use change. The mean annual temperature is about 27°C, and annual precipitation ranged from 2,734 to 3,312 mm during this period. Temperature and rainfall data were retrieved from the Department of Irrigation and Drainage of Malaysia in Sri Aman, Sarawak. Historically, the tropical peat swamp forest remained waterlogged throughout the year except during the dry season (May to September), when groundwater levels temporarily receded below the peat surface. The wet season, marked by heavy rainfall, peaks in January and spans from November to March. Following land clearance, groundwater levels were artificially managed (lowered or raised) with drains, canals, and water-blocking structures (weirs) to support the establishment of oil palm plantations on tropical peatlands.
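The functional gene ratio indicators described above (mcrA:pmoA, amoA:(nirK+nirS), (nirK+nirS):nosZ, and norB:nosZ) can be sketched compactly; the abundances below are hypothetical and this is not the study's code.

```python
# Illustrative sketch (hypothetical normalized abundances) of the gene-ratio
# indicators: methanogenesis vs CH4 oxidation, nitrification vs
# denitrification, and denitrification-driven gaseous nitrogen loss potential.

def gene_ratios(a):
    """a: dict of normalized gene abundances keyed by gene name."""
    return {
        "mcrA:pmoA": a["mcrA"] / a["pmoA"],
        "amoA:(nirK+nirS)": a["amoA"] / (a["nirK"] + a["nirS"]),
        "(nirK+nirS):nosZ": (a["nirK"] + a["nirS"]) / a["nosZ"],
        "norB:nosZ": a["norB"] / a["nosZ"],
    }

abundances = {"mcrA": 4.0, "pmoA": 2.0, "amoA": 1.5,
              "nirK": 3.0, "nirS": 1.0, "nosZ": 2.0, "norB": 5.0}
ratios = gene_ratios(abundances)
```

Ratios above 1 for (nirK+nirS):nosZ or norB:nosZ suggest that genes producing N2O outnumber the nosZ genes that consume it, which is why they serve as indicators of potential gaseous nitrogen loss.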
Chemical fertilizers for the young oil palm trees were applied by the plantation management in two to three rounds per year, beginning in June 2018 and subsequently in April through May and September through October, avoiding months with high rainfall intensity or dry periods, with the amounts adjusted according to palm age and crop requirements. Young palms received the following annual amounts: nitrogen as 1.0–2.0 kg of ammonium sulfate and 0.5–1.5 kg of urea; phosphorus as 2.0–3.0 kg of rock phosphate; potassium as 1.5–2.5 kg of muriate of potash; and micronutrients as 0.1–0.2 kg of copper, zinc, and borate. We sampled soil respiration (CO2, CH4, and N2O gases) using the closed-chamber method. At each sampling time, eight open-ended stainless-steel cylinders (25 cm height; 10 cm radius; n = 8) were randomly installed in situ during the different years as the land transitioned from a secondary peat swamp forest to the land preparation phase and then to an oil palm plantation. In the secondary peat swamp forest and during land preparation, the chambers were placed randomly within a 50 m radius; in the oil palm plantation, they were installed 1.5 m from the base of the oil palm trunks. For estimating CO2 soil respiration, 250 cm3 of surface air was collected at time zero, followed by another 250 cm3 air sample from each closed chamber at 4-minute intervals. These samples were transferred into Tedlar gas sampling bags using a 25 mL syringe connected to the lid of the closed chamber through silicone tubing. The linear relationship between CO2 efflux and time in the closed-chamber method has been independently validated in other tropical peatland studies, with sampling intervals of 4, 10, and 40 minutes demonstrating consistent proportional relationships. For CH4 and N2O, 20 cm3 air samples were collected from each chamber at time zero and subsequently at 10-, 20-, and 40-minute intervals after chamber closure.
The 20 cm3 air samples were transferred into pre-vacuumed gas chromatography (GC) vials and transported to the laboratory. Relative humidity and air temperature were measured with a TESTO 625 (Testo SE & Co. KGaA, Germany), soil temperature (10 cm depth) with a Checktemp 1 HI-98509 digital thermometer (Hanna Instruments, USA), and soil moisture with a DIK-311F Soil Moisture Meter (Daiki, Japan). These parameters were measured (in six replicates) in the vicinity of each chamber during soil respiration sampling. Perforated polyvinyl chloride (PVC) pipes were installed in auger holes, and groundwater levels were determined by subtracting the height of the pipes from measurements taken from the water surface to the top of the pipes using a measuring tape (n = 8 per sampling time). At each sampling time, composite peat samples for chemical analyses and metagenome extraction were collected from the eight sampling points with a peat auger (Eijkelkamp, The Netherlands), following the compositing procedure described above.
Fully Automated Artificial Intelligence Solution for Human Epidermal Growth Factor Receptor 2 Immunohistochemistry Scoring in Breast Cancer: A Multireader Study

The standard-of-care evaluation of human epidermal growth factor receptor 2 (HER2) in breast cancer includes immunohistochemistry (IHC) to assess protein overexpression and in situ hybridization (ISH) to determine gene amplification. ASCO and the College of American Pathologists (CAP) published guidelines for HER2 testing (first in 2007, with updates in 2013, 2018, and 2023) that enhanced the standardization of HER2 testing in clinical practice. Because the available HER2-targeted therapy was beneficial only to patients with HER2-positive disease, the testing guidelines provided recommendations for clearly distinguishing a negative from a positive result. The categorization of the HER2 testing result has therefore, until now, been essentially binary.

CONTEXT

Key Objective

Can a fully automated artificial intelligence (AI)–based human epidermal growth factor receptor 2 (HER2) immunohistochemistry (IHC) scoring solution in breast cancer aid general surgical pathologists in consistent and accurate HER2 scoring in comparison with manual digital scores provided by expert breast pathologists?

Knowledge Generated

The HER2 AI solution could be applied irrespective of the laboratory performing HER2 IHC, the antibody, or the scanner used to generate whole-slide images of HER2 IHC slides. The AI solution demonstrated a standalone accuracy of 92.1% in comparison with the HER2 scores of breast experts. Utilization of the HER2 AI solution by surgical pathologists significantly improved interobserver agreement not only in all HER2 scores but particularly in the distinction of HER2 0 from HER2 1+ cases.
Relevance

The performance of the HER2 AI solution supports its consideration as a decision support tool for pathologists to improve HER2 scoring in routine clinical practice, especially for the optimal identification of HER2-low breast cancers.

The results of the DESTINY-Breast 04 clinical trial reported by Modi et al showed the need to identify patients with low levels of HER2 protein expression and to distinguish 0 from 1+ scores. In that trial, patients with metastatic breast cancer that was HER2-negative but had HER2 IHC results of 1+, or 2+ with negative ISH results (referred to as HER2-low breast cancer), showed significant improvement in survival after treatment with the antibody-drug conjugate fam-trastuzumab deruxtecan-nxki. The favorable results of the trial led to the drug's approval by the US Food and Drug Administration for the treatment of patients with HER2-low breast cancer. The drug's approval was also followed by the premarket approval of a semiquantitative HER2 IHC assay (Ventana PATHWAY anti-HER2/neu 4B5 rabbit monoclonal antibody on the BenchMark ULTRA instrument) for the optimal identification of these patients. The most recent ASCO/CAP update of the HER2 testing guidelines provides best-practice recommendations for the distinction of HER2 0 from 1+, including evaluation of HER2 IHC at high-power magnification (×40) and seeking consensus review when needed. The subjectivity of manual interpretation, whether by light microscopic examination or on digital whole-slide images (WSIs) of HER2 IHC, and the challenges pathologists face in recognizing breast cancers with low levels of HER2 protein overexpression are well recognized. The adoption of digital pathology has grown significantly in recent years, enabling the implementation of AI tools to support the triage of cases, primary diagnosis, and biomarker quantification.
Specifically, there is currently great interest in exploring computational image analysis using deep learning–based algorithms for objective categorization of HER2 IHC results, particularly to facilitate the identification of HER2-low breast cancers. We sought to evaluate the performance of a fully automated AI-based solution and to assess its potential utility in improving the concordance of HER2 scoring and the identification of HER2-low breast cancers in an international multicenter reader study.

Study Cohort

The study cohort included hematoxylin and eosin (H&E)–stained and HER2 IHC slides of 120 patients with breast cancer from four pathology laboratories in three geographic regions, that is, the United States (1), France (1), and Israel (2). The cohort included randomly selected retrospective cases from 2021 to 2022, and the required sample size for the study was calculated (Data Supplement, Methods).
Each laboratory processed the slides on the basis of its institutional staining protocol using one of three different HER2 antibodies: 4B5 (Roche), HercepTest (Dako), and EP3 (Cell Marque). The HER2 IHC and corresponding H&E slides were scanned at 40× magnification on Philips UFS and Hamamatsu C13220 scanners and were fully anonymized. WSIs were obtained under ethics approval at each site by the local ethics committee or institutional review board, with waiver of informed consent from the patients. The study cohort included invasive ductal carcinoma, not otherwise specified type (45%), and special types, including invasive lobular (23%), tubular (6.7%), mucinous (6.7%), apocrine (6.7%), metaplastic (5%), adenoid cystic (3.3%), cribriform (2.5%), and secretory carcinomas (0.8%; Table). The distribution of HER2 scores in the study cases, as reported in standard-of-care practice by the laboratories, was HER2 0, 40 (33%); HER2 1+, 38 (32%); HER2 2+, 24 (20%); and HER2 3+, 18 (15%) slides.

Algorithm Development

The algorithm (Galen Breast HER2, Ibex Medical Analytics) was designed to support the interpretation and quantification of HER2 IHC on WSIs. It receives as input WSIs of HER2 IHC–stained tissue sections and runs six computational steps on each WSI (Fig). It first detects the tissue fragments and then identifies the on-slide control (if present). In the third step, an AI algorithm identifies areas of interest, namely invasive tumor regions. Within those regions, another AI model detects the individual tumor cells and classifies their HER2 IHC staining pattern according to membrane staining intensity and completeness (not stained, moderate incomplete, intense complete, etc.).
Finally, the algorithm calculates the slide-level HER2 score (0, 1+, 2+, or 3+) from the cell counts according to the ASCO/CAP 2018/2023 guidelines and generates contours and cell overlays to visualize its results, which are displayed to the user in the Galen slide viewer. The two main AI models (for invasive cancer detection and for cell classification) are based on multilayered convolutional neural networks specifically developed for image classification and object detection tasks, respectively. Additional details on the AI algorithm development are included in the Data Supplement.

Study Design

The prospective multireader study with crossover design included four general surgical pathologists as readers, whose performance in the digital review of HER2 IHC on WSIs without and with the aid of AI was compared with the ground truth (GT) provided by five expert breast pathologists (Data Supplement, Fig S1). The readers and GT experts reviewed HER2 IHC digitally and assigned HER2 IHC scores according to the ASCO/CAP 2018/2023 guidelines. The study comprised two arms: in arm A (digital manual read), WSIs of HER2 IHC were reviewed and scored manually using a digital viewer, and in arm B (AI-supported read), the pathologists reviewed and scored the same WSIs of HER2 IHC using the AI solution. HER2 IHC slides were assigned in random order to each of the study reading pathologists (a pool of board-certified pathologists with 5-20 years of experience) and to the GT breast experts. GT was established by a team of five international expert breast pathologists from the United States (S.K. and S.J.S.) and Europe (E.C., R.C.-M., and A.V.-S.). The HER2 IHC scores reported by the experts constituted the GT for evaluating both the standalone performance of the AI algorithm and the utility of the AI solution as an ancillary aid to the four general pathologists.
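The guideline-based slide-level scoring step described under Algorithm Development can be sketched as a mapping from per-cell staining fractions to a HER2 score. This is a simplified paraphrase of the ASCO/CAP thresholds, not the Galen Breast HER2 implementation; the staining-class names and edge-case handling are illustrative assumptions.

```python
# Simplified sketch: slide-level HER2 IHC score from fractions of invasive
# tumor cells in each membrane-staining class (ASCO/CAP-style thresholds).
# NOT the vendor's actual logic; class granularity is illustrative.

def her2_score(frac_complete_intense, frac_complete_weak_moderate, frac_incomplete_faint):
    """Each argument is the fraction of invasive tumor cells in that staining class."""
    if frac_complete_intense > 0.10:
        return "3+"  # complete, intense membrane staining in >10% of cells
    if frac_complete_weak_moderate > 0.10 or frac_complete_intense > 0:
        return "2+"  # weak/moderate complete in >10%, or intense complete in <=10%
    if frac_incomplete_faint > 0.10:
        return "1+"  # faint, incomplete membrane staining in >10%
    return "0"       # no staining, or faint incomplete staining in <=10%

score = her2_score(0.00, 0.40, 0.10)  # weak/moderate complete in 40% of cells
```

The hard 10% thresholds are exactly where small per-cell classification differences flip a slide between categories, which is why automated, exhaustive cell counting is attractive for the 0 versus 1+ boundary.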
High-confidence GT was defined as agreement on the HER2 IHC score by at least four of the five breast experts and was used for the analysis of the study results. Rates of agreement in HER2 scoring between each arm and the GT were compared. All the study slides were assessed with both modalities (arms) by the same reader, with a washout period of 2 weeks between the two reads. The readers assessed all the study slides in both arms and were blinded to the results of the other arm and to each other. The GT expert pathologists were blinded to each other's results and to the readers' scores.

Statistical Analysis

The agreement between the GT breast experts for each HER2 IHC score was determined, including the 95% CI. Similarly, the agreement between the readers (ie, interobserver concordance) for each HER2 IHC score without and with the aid of AI was determined, including the 95% CI, for the entire cohort. In addition, interobserver concordance for HER2 IHC scores 0 and 1+ according to the GT was also established. The mean agreement of the readers with the high-confidence GT for all scores and for the distinction of HER2 0 from 1+ scores was evaluated. The standalone performance of the AI algorithm for each HER2 IHC score was determined by comparing the AI scores with the high-confidence GT scores. The study was not powered for HER2 0 versus 1+ or for calculating accuracy per scanner or per antibody. Statistical analyses were performed using SAS v9.4 (SAS Institute, Cary, NC). Continuous variables were summarized using mean and standard deviation, and categorical variables by count and percentage. The required significance level was P < .05.
GT

The overall interobserver agreement among the five expert breast pathologists who reviewed and scored the HER2 IHC WSIs was 72.4% (intraclass correlation coefficient, 0.86; 95% CI, 0.82 to 0.89), with complete agreement among all five for 53 of the 120 slides (44.2%).
Four of the five GT breast pathologists agreed for another 39 (32.5%) slides (Fig A). Because of the high variability in expert agreement on HER2 scores, using simple majority might lead to unreliable GT (eg, for cases with three v two agreement), and thus, we decided to use high-confidence GT for most of the analyses. For HER2 scores 0, 1+, 2+, and 3+, experts' agreement (average of all GT pair agreement rates for each HER2 score, N = 120) was 80.1%, 65.9%, 69.2%, and 96.4%, respectively. The distribution of the HER2 IHC scores of the 92 (76.7%) slides for which at least four of the five expert breast pathologists agreed, defined as high-confidence GT, included 27 (29.3%) slides scored as HER2 0, 31 (33.7%) scored as 1+, 16 (17.4%) scored as 2+, and 18 (19.6%) scored as 3+ (Fig B). The HER2 IHC scores with high-confidence GT were used to determine the performance of the AI algorithm and that of the readers without and with the aid of the AI tool. For the other 27 (22.5%) slides, only three of the five breast pathologists agreed, and these slides were included only in part of the analyses. For these 27 slides with a weak majority agreement on HER2 score (low-confidence GT), the discrepancies were between 0 versus 1+ in 12 (44.4%) slides, 1+ versus 2+ in 12 (44.4%) slides, and 2+ versus 3+ in one (3.7%) slide (Fig C). One case had no majority agreement on GT and was excluded. The percentage of HER2 0+, 1+, 2+, and 3+ cases scored by each of the five experts is shown in Figure A. Readers' Performance The interobserver agreement of the four general surgical pathologists was variable for each HER2 IHC score and changed between the two arms. In arm A (digital manual read), the interobserver agreement was 75.3%, 61.6%, 63.0%, and 94.4% for HER2 0, 1+, 2+, and 3+, respectively, whereas in arm B (AI-supported read), the interobserver agreement was 92.5%, 72.5%, 53.6%, and 97.2%, respectively. 
Thus, the interobserver agreement increased from arm A to arm B for HER2 0, 1+, and 3+ scores, whereas it decreased for HER2 2+. The highest and statistically significant improvement was noted for HER2 IHC score 0 and for HER2 1+ slides. The percentage of HER2 0+, 1+, 2+, and 3+ cases scored by each of the four readers without and with the ancillary HER2 AI tool is shown in Figures B and C and the Data Supplement (Table S1). The smallest discordances in the percentage of cases were observed in the HER2 3+ category without and with AI (6.7% and 0.8%), followed by HER2 0 (9.2% and 6.7%). For HER2 1+ and 2+ cases, the percentage discordances were high (28.6% and 26.1% without AI; 23.5% and 24.4% with AI). The overall interobserver agreement of the four readers for the 119 study cases was 69.7% in arm A and 77.2% with the help of the HER2 AI tool in arm B (Fig A). The interobserver agreement of the four reader pathologists for the slides with high-confidence GT (n = 92) was 75% in arm A and 83.7% in arm B (P < .05; Fig B). The pathologists' interobserver agreement significantly improved for the distinction of HER2 0 from 1+ cases with a high-confidence GT (n = 58) from 69.8% in arm A to 87.4% with the help of the AI in arm B (Fig C). The mean overall accuracy of the four readers compared with the high-confidence GT HER2 IHC scores (n = 92 slides) was 85.3% in arm A (digital manual read) and 88% in arm B (with AI ancillary aid; Fig D). The readers' accuracy for the distinction of HER2 0 from 1+ cases (n = 58 slides with high-confidence GT) increased from 81.9% in arm A to 88.8% with the support of the HER2 AI tool in arm B, but did not achieve statistical significance (Fig E). Two examples in which the interobserver concordance improved with the aid of the AI solution are illustrated in Figure .
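The agreement statistics reported in these results can be reproduced from a table of raw scores with a few lines. Below is a sketch of the two recurring computations, the high-confidence GT rule (at least four of five experts agreeing) and average pairwise percent agreement; the reader names and scores are toy data, not study data:

```python
from collections import Counter
from itertools import combinations

def high_confidence_gt(expert_scores, min_agree=4):
    """Consensus HER2 score if at least `min_agree` of the experts
    gave the same score; None marks low-confidence GT."""
    score, count = Counter(expert_scores).most_common(1)[0]
    return score if count >= min_agree else None

def pairwise_agreement(scores_by_reader):
    """Average percent agreement over all reader pairs; each reader's
    list holds scores for the same slides in the same order."""
    rates = []
    for a, b in combinations(scores_by_reader.values(), 2):
        rates.append(sum(x == y for x, y in zip(a, b)) / len(a))
    return 100 * sum(rates) / len(rates)

experts = {"E1": ["0", "1+", "2+", "3+"],
           "E2": ["0", "1+", "2+", "3+"],
           "E3": ["0", "1+", "1+", "3+"],
           "E4": ["0", "1+", "2+", "3+"],
           "E5": ["0", "2+", "2+", "3+"]}
slide2 = [s[1] for s in experts.values()]   # scores for one slide
print(high_confidence_gt(slide2))           # 1+ (4 of 5 experts agree)
print(pairwise_agreement(experts))          # 80.0
```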
HER2 AI Standalone Performance The standalone performance of the HER2 AI algorithm was determined for each HER2 IHC score comparing the automatic AI output scores with the high-confidence GT scores (n = 92). The accuracy of the HER2 AI tool was 100% for the HER2 3+ slides, followed by 92.6% for HER2 0 slides and 90.3% for HER2 1+. The lowest agreement was 87.5% for HER2 2+ slides. Overall, the accuracy of the HER2 AI tool was 92.1% (Data Supplement, Fig S2). Here, we report the performance of a fully automated AI solution for HER2 IHC analysis that helped improve interobserver concordance and accuracy of HER2 scoring among general surgical pathologists, measured by agreement with expert breast pathologists. The AI solution was applicable across different laboratories, antibody clones, and scanners. Importantly, the solution significantly increased the interobserver agreement and accuracy of HER2 0 and 1+ scores, thereby demonstrating its potential role in improved identification of HER2-low breast cancers. The fully automated AI solution for HER2 IHC scoring reached a standalone accuracy of 92.1% compared with high-confidence GT scores.
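The per-score standalone accuracies quoted here are conditional proportions: among slides whose high-confidence GT is a given score, the fraction for which the AI output matched. A small sketch under that reading (toy data, not the study's):

```python
def accuracy_by_score(ai_scores, gt_scores):
    """Percent of slides with each GT score for which the AI score matched."""
    out = {}
    for score in sorted(set(gt_scores)):
        idx = [i for i, g in enumerate(gt_scores) if g == score]
        out[score] = 100 * sum(ai_scores[i] == score for i in idx) / len(idx)
    return out

gt = ["0", "0", "1+", "1+", "2+", "3+"]
ai = ["0", "1+", "1+", "1+", "2+", "3+"]
print(accuracy_by_score(ai, gt))
# {'0': 50.0, '1+': 100.0, '2+': 100.0, '3+': 100.0}
```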
The AI solution was able to recognize invasive tumors with high precision, detect the HER2 expression pattern in individual invasive tumor cells, and provide HER2 IHC scores on the basis of the ASCO/CAP 2018/2023 guidelines. The AI tool helped the four general surgical pathologists to achieve significantly improved interobserver agreement and some improvements in agreement with high-confidence GT scores established by experts. The latter is of high importance as HER2 IHC interpretation can be very subjective, leading to relatively high intra- and interpathologist variability, even among experts. Thus, in order to reach more reliable GT and results, in the current study, we decided to use five GT experts who reviewed all the cases, and we defined high-confidence GT as a majority agreement of four of five experts. Similarly, each case was reviewed by four reader pathologists, so that the statistics on the performance in each arm were more robust (more power even if the study cohort was not large). The AI tool provided ancillary support that resulted in significantly improved interobserver agreement for 0 and 1+ HER2 scores, which, in previous studies, showed poor to moderate interobserver agreement. A distinct advantage of the AI solution, supporting its robustness, was the diversity of the study material, which included slides stained in diverse laboratories, with different HER2 antibodies, and two different digital scanners used to generate the WSIs of HER2 IHC. Image analysis–based decision support tools for HER2 IHC scoring have been available for routine pathology practice in the past decade. In addition, the CAP has published guidelines for safe incorporation of digital image analysis–based HER2 quantification. Most of these tools were trained and validated to aid pathologists for accurate identification of HER2-positive (3+) breast cancers and for optimal identification of cases with 2+ scores to facilitate triaging for HER2 ISH.
The utility of these digital aids to facilitate identification of HER2-low breast cancers is very limited as most of these tools were not developed to differentiate between HER2 0 and 1+ scores. In a recent study by Sode et al, the concordance between digital image reading and algorithm-assisted reading for identification of HER2-low cancers was only moderate. In recent years, AI solutions were developed to aid pathologists for optimal categorization of HER2 0 and 1+ scores. Some of these AI algorithms were developed to provide real-time decision support to pathologists as an augmented reality module attached to a light microscope. These aids were designed to assist in HER2 scoring of regions of interest selected by a pathologist on light microscopic images and not for use on WSIs. Most of the reported AI solutions for HER2 scoring were developed to aid pathologists in the interpretation of digital WSIs of HER2 IHC, similar to our study. Of note, unlike our study, previous AI solutions for HER2 scoring were not fully automated, requiring human intervention for selecting the region of interest. For example, the studies reported by Palm et al and Frey et al (in abstract form) required manual annotation of regions of interest in the invasive tumor for application of the AI algorithm. While the report of an AI-powered HER2 analyzer by Jung et al (in abstract form) in their study of 209 HER2 WSIs suggests using the entire tumor, it is not clear if the AI tool recognized the invasive tumor for evaluation or if it required initial annotation. Similar to the AI solution validated in our study, Frey et al reported that their AI-based HER2 IHC quantifier software could be used across four institutions and five scanners, using different HER2 antibody clones at each site. None of the other reported AI solutions for HER2 scoring reported this advantage.
Establishment of the standalone performance of the AI solution is important to estimate its potential to aid pathologists and to generate trust by pathologists. Standalone performance was reported only in one of the previous reports of AI solutions for HER2 scoring and in a previous abstract including an earlier version of the current AI solution. In the current study, the performance of the AI tool for HER2 0, 1+, 2+, and 3+ scores was 92.6%, 90.3%, 87.5%, and 100%, respectively, when compared with high-confidence GT HER2 scores. The two-arm design is another key aspect of the current study that allowed us to directly measure the impact of AI assistance on HER2 scoring by general surgical pathologists. We observed improvement in both interobserver agreement and accuracy after the use of the AI tool. Specifically, for cases whose GT was HER2 0 and 1+, that is, cases around the HER2-low boundary, the readers' accuracy increased from 81.9% to 88.8%. The current study has several limitations. While the number of cases reviewed by both expert GT and reader pathologists is significant (600 and 960 reads, respectively), which allowed us to reach reliable GT scores and to calculate statistically significant improvements in HER2 scoring by readers, the actual number of patients in the cohort is relatively small. Larger international multicenter validation studies are underway to further substantiate the results of our study and to gain new insights into the performance of the AI tool and its utility across different laboratories and scanners and for various cancer subtypes and patient subpopulations. The utility of the fully automated HER2 AI solution to improve interobserver concordance of HER2 scoring of breast experts was not evaluated in the current study. However, this is a subject of evaluation in our ongoing investigations and the results will be presented in subsequent reports. 
Future studies may also validate specific steps of the solution separately, for example, how well the cell detection model identifies each staining pattern. Following the results of the current study, an improved version of the AI tool is under development to address specific shortcomings of the algorithm, such as its performance in distinguishing between HER2 2+ and 1+ cases. This study demonstrated a significant potential of AI tools to improve HER2 scoring, which is essential for determining appropriate systemic therapy for patients with breast cancer. Specifically, in the HER2-low era and in the absence of other ancillary tests for differentiating HER2-low and ultra-low cases, AI solutions could be used as decision support tools for pathologists in standard-of-care pathology practice, enhancing the reproducibility and consistency of HER2 scoring, thus enabling optimal treatment pathways and better patient outcomes. |
Mapping local and distal effects of different neuropathologies on amygdala volume | a3372ba5-42d4-40cc-988b-61e49cc5f65a | 11712964 | Forensic Medicine[mh] | |
Who are CHWs? An ethnographic study of the multiple identities of community health workers in three rural Districts in Tanzania | 5453b7ca-c133-44a0-b380-a09b68f1ed76 | 6802175 | Preventive Medicine[mh] | Over the past two decades, official and scholarly discourse has viewed CWHs as a panacea for a range of vertical and health system challenges . A systematic review conducted by the World Health Organization (WHO) using eight in-depth country cases suggests that CHWs can potentially improve population level health through provision of maternal and child health services, case management of uncomplicated illnesses, and by engaging in preventive education on malaria, Tuberculosis (TB), HIV/AIDS and Non-Communicable Diseases (NCDs) . CHWs in South Africa have increased access to HIV/AIDS services and counselling by making them a focus in both health care centers and during homestead visits . However, as Uta Lehmann and David Sanders suggest in the WHO report titled, “Community Health Workers: What do we know about them?”, for CHWs to continue being effective in improving maternal and child health, they need to be carefully selected, trained, supervised, and supported . Community workers, as the authors suggest, need to be seen as part of the social and health system, “in which different actors are linked with each other in chains of relationships” . CHW programs continue to gain prominence with the increase of infectious and non-communicable diseases in urban contexts of developing countries. CHWs have been used in Kenya to address “neglected tropical diseases” such as schistosomiasis in informal urban settlements . Researchers in Kenya noted that due to CHWs’ local status, community members were more willing to participate in the research. CHWs also provided extensive information about the areas they worked in, which informed the treatment plan for the schistosomiasis project in Kenya . 
Similarly, in the wake of the Ebola outbreak in Sierra Leone, CHWs trained by the United Nations Population Fund (UNFPA) and the government practiced "contact tracing," a method of "tracking contacts, or people linked to confirmed or probable Ebola cases". Tapping into the close-knit infrastructure of local CHW networks, this method assisted in the early detection and rapid treatment of Ebola and has become a model for disease surveillance in different African countries. To address health systems challenges, improve formal health services utilization, and target specific health challenges such as infectious and non-communicable diseases, program designers and countries have designed and experimented with different types of CHW programs. CHW programs have generally been divided into generalist and specialist types. Generalist CHWs have mostly been volunteers used both by government and non-governmental programs to provide curative and educational services and assist in vaccination campaigns. Specialist CHWs often focus on specific areas such as tuberculosis care and malaria control. The level of training and areas of focus between CHW programs also vary. Some CHWs undergo a short training, typically lasting two weeks, and work in basic preventive and curative services. Others are trained for an extensive period of time and provide a wide range of preventive and curative services. Some are compensated for their work, have a strong clinical focus, and provide verbal and assisted referrals, while others work as unpaid volunteers. In 2014, the Tanzanian Ministry of Health approved a community-based health program (CBHP) policy guideline that called for a special type of CHW. This policy envisioned a special cadre of CHWs who would be locally selected and deployed. They would be trained based on a government-approved curriculum and employed by the government.
These CHWs would provide an integrated and comprehensive package of services including Reproductive, Maternal, Newborn, Child and Adolescent Health (RMNCAH), connect households to facility services, engage in preventive and curative services, and provide disease surveillance. While there is a rich body of literature on the vital role that CHWs play in health services delivery, there is a limited amount of literature on how CHWs' personal and communal identities interact with their current professional roles and the implications this has on their work. CHWs in sub-Saharan Africa typically work in rural settings, where personal and professional roles are not easily differentiated. Often, these roles are performed simultaneously, blurring the lines between domestic and professional work. The current study builds on a limited but growing body of literature on the identity of CHWs—who are they?—and their intermediary role linking the community to the health care system. Several recent studies noted that CHWs' personal and communal roles and identities interact with professional roles, enhancing and sometimes impeding their health services delivery work. In their study in rural South Africa, Mlotshwa et al. argued that CHWs' existing identities, as community members or farmers, played an important role in their recruitment as CHWs, the services they delivered, and the level of trust they developed with their patients. In two studies dated 2015 and 2017 by Kok et al., the authors also confirmed these findings. The authors observed that, in CHWs' transition into a professional role as health care workers, their "insider role" was positively evaluated by community members, who saw one of their "own" bringing in services and opportunities that did not exist before. However, CHWs' professional role created tensions and frictions with the community. On some occasions, the CHWs were viewed with suspicion and seen as gatekeepers to vital external resources.
Similarly, in a study by Mumtaz et al., the authors listed many barriers that faced lady health workers (LHWs) in Pakistan. The authors noted that LHWs faced conflicts in balancing their domestic and professional work. Domestic work such as collecting water or firewood, despite enhancing social relations and building trust among villagers, was not associated with the delivery of professional services. In Kok et al.'s study, the authors also showed that CHWs experienced tension and stress from their new roles because they created different expectations. For the community, CHWs represented the prospect of more curative services. Politicians used the CHWs for their various political campaigns and meetings, while health care workers expected that CHWs would advise more community members to attend formal health care centers. Other studies have examined CHWs' use of time, indirectly suggesting that the "official time" may mask other uses of time influenced by local identities and roles. In Tani et al.'s study in Tanzania, the authors examined CHWs' use of time during their 8:00 AM to 4:00 PM work regimen, breaking down the types of services they provided and the amount of time spent on each activity. However, the authors noted that while their study provided a useful picture of how CHWs spent their official time, it did not capture services and activities that were provided outside official time. In this regard, CHWs' jobs do not fit neatly into the standard 07:30 AM to 5:30 PM work schedule of the Tanzanian government. CHWs may be called on to serve their communities at all times of day and night, at any place they meet the client. As in Mlotshwa et al.'s study, CHWs in Kilombero assisted in escorting pregnant mothers on their way to delivery, attended to accidents, bought supplies on behalf of their patients, and re-visited patients for checkups after official work hours.
These "non-professional" activities are not captured in official time-use measures, which seem to ignore the role of pre-existing identities in motivating CHWs to provide services during off hours. In 2010, Connect, a project conducted under the Ifakara Health Institute (IHI), introduced a full-time paid CHW intervention program in three Tanzanian districts: Kilombero, Ulanga and Rufiji. Connect's unique cadre of health workers were called community health agents. In Swahili, they were known as Wawezeshaji wa Afya ya Jamii (WAJA). They were called agents (as opposed to "workers") to emphasize their role in facilitating change towards a healthy life and connections between the community and the formal health sector. Tanzania's Ministry of Health and Social Welfare's 2007–2010 Primary Health Services Development Program and the fourth Health Sector Strategic Plan of 2015–2020 both called for multi-purpose CHWs with standard training and a remuneration package. Connect's aim was to test the Ministry of Health's hypothesis that the deployment of paid, well-trained, multi-purpose CHWs with adequate health system support would accelerate the attainment of Millennium Development Goals (MDGs) 4 and 5, which aim to reduce child and maternal mortality. Connect used a community-based clustered randomized controlled trial (RCT) research design to test Mpango wa Maendeleo wa Afya ya Msingi (MMAM), Tanzania's primary health services development program vision. The RCT design included 101 villages, and WAJAs were randomly assigned to 50 intervention sites. The Connect project was designed to increase community involvement in program operations by using residents as WAJAs and local leaders as their supervisors. The criteria for selecting WAJAs required the candidates to be village residents, possess a secondary education, and be willing to serve their community. Selection of WAJAs was done openly in a village assembly.
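The cluster-randomized allocation described above, with 50 of the 101 villages assigned to the intervention arm, can be sketched as follows (a generic illustration, not the study's actual randomization procedure; the function, village names, and seed are ours):

```python
import random

def assign_clusters(villages, n_intervention, seed=0):
    """Randomly assign `n_intervention` villages to the intervention arm;
    the remaining villages serve as comparison clusters."""
    rng = random.Random(seed)
    intervention = set(rng.sample(villages, n_intervention))
    return {v: ("intervention" if v in intervention else "comparison")
            for v in villages}

villages = [f"village_{i:03d}" for i in range(1, 102)]  # 101 villages
arms = assign_clusters(villages, 50)
print(sum(a == "intervention" for a in arms.values()))  # 50
```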
A total of 146 WAJAs in three cohorts from 50 villages in Kilombero, Ulanga and Rufiji were trained. Although WAJAs ranged in age from 18 to 45, most were young and were perceived by the community as youth. The villages advertised the position and the community voted to select its WAJAs. Once selected, WAJAs were trained for a period of nine months at IHI's headquarters in Ifakara, Tanzania, and then returned to their home villages. Their support system included three district intervention coordinators. It also included WAJA focal persons nominated by each district to oversee the project's activities and two supervisors at the village level to assist with community and medical advice. WAJAs' work included curative services for uncomplicated cases, referrals to formal health centers and preventive health education (See Fig. ). WAJAs provided a roster of health services, including treatment of children under the age of five for uncomplicated malaria, pneumonia and diarrhea, referral of complicated cases to formal health centers, as well as health education on family planning, safe motherhood, essential neonatal care, integrated management of child illness and basic hygiene (See Fig. ). WAJAs provided these services in their home villages (See Fig. ). Each village was typically assigned one male and one female WAJA; however, for villages larger in population and area, the number of WAJAs was increased to three per village. Each WAJA served an average population of 2000 villagers. Every week, WAJAs created a work schedule that estimated 40 h of work. To ensure quality and the execution of work, the health facility supervisor and village supervisor roles were created to support the WAJAs. The two supervisors were given the WAJAs' work schedule every week for review and to plan follow-ups. In many instances, the village supervisor accompanied the WAJAs on their household visits.
For their services, WAJAs were paid the equivalent of $120 per month, in comparison to formal health care workers at the health centers, who received $300 per month. The cost of training one WAJA was $1348.21. Village supervisors were unpaid. Study aim and research questions This study aimed to understand the community's reception of community health workers in Connect intervention areas using an ethnographic approach. We pursued three research questions to understand the study's main aim: (1) How are CHWs' professional roles enacted? (2) Do professional roles interact with other CHW roles? (3) If so, which ones and how? Understanding CHWs' existing roles and identities, and how they interact with professional roles, is vital in strengthening CHW recruitment and motivation, program design, and success. Conceptual framework Our conceptual model draws from the theory of multiple identities and open systems thinking. We first draw from the theory of multiple identities, which proposes that individuals have multiple overlapping identities that vary over time and according to specific contexts. Furthermore, these multiple identities, such as being a WAJA and a son or daughter, sometimes exist in tension with each other as groups and individuals make demands on the actors. In turn, the individuals negotiate these competing demands by deploying one or several roles and identities over the others. To situate how WAJAs, as health providers, are part of a wider social system, we draw from open systems thinking on health. Systems thinking approaches health providers as part of a wider environment, which is not limited to material aspects but includes the economy, politics, gender norms and formal and informal institutions. These contexts exert force on the health care providers and vice versa. All aspects of the social system impact health care providers, as none of them works in isolation.
While different institutions are separate from one another, it is important to recognize that the boundaries between them are porous and interdependent. Conceptual model for WAJAs' multiple identities Accordingly, we viewed the WAJA's role in the health care system following some of the ideas and principles espoused in multiple identity theory and open systems thinking. In our simplified model (see Fig. ), we show that prior to the project's arrival WAJAs inhabited various roles such as farmers and community members. We use farmers in our model as a placeholder for a range of "non-professional" money-making activities that WAJAs engaged in, in addition to their salaried position. Although almost all WAJAs farmed their own land or were paid laborers in others' fields at certain times of the year, many WAJAs also participated in less seasonal side work such as brick making, motorcycle-taxi driving, making palm oil, and fishing. We use community member as a label to signal a wide range of social, spatial and political affiliations, such as being a village resident, a healer, or a religious or ethnic member, or belonging to village-level associations such as the village government, health committees or a dance group. WAJA identities consisted of personal, communal and professional roles. WAJAs' roles and identities did not remain static but varied with changing contexts, as our conceptual model shows. In open systems, when one part changes, such as access to medicine, it influences other parts of the system, such as the community's evaluation of the WAJA's status as a professional. As the project progressed, the WAJA's professional role became amplified through access to medicine, salary, training and the symbolic capital associated with a powerful project. Inversely, their identity as farmers and community members became diminished in comparison to their new role as WAJAs (See the second box in Fig. ).
However, when medical supplies and salaries were delayed, the professional WAJA’s role again diminished. As the third box in Fig. shows, their personal and communal roles became more amplified than their professional roles as WAJA. This study aimed to understand the community’s reception of community health workers in Connect intervention areas using ethnographic approach. We pursued three research question to understand the study’s main aim: (1) How are CHWs professional roles enacted? (2) Do professional roles interact with other CHWs roles? (3) If so, which ones and how? Understanding CHWs existing roles and identities, and how they interact with professional roles is vital in strengthening CHWs recruitment and motivation, program design, and success . Conceptual framework Our conceptual model draws from the theory of multiple identities and open systems thinking . We first draw from the theory of multiple identities, which proposes that individuals have multiple overlapping identities which vary overtime and according to specific contexts. Furthermore, these multiple identities, such as being a WAJA and a son or daughter, sometimes exist in tension with each other as groups and individuals make demand on the actors . In return, the individuals negotiated these competing demands by deploying one or several roles and identities over the others . To situate how WAJA, as health providers, are part of a wider social system, we draw from open systems thinking on health . Systems thinking approaches health providers as part of a wider environment, which is not limited to material aspects but include economy, politics, gender norms and formal and informal institutions. These contexts exert force on the health care providers and vice versa. All aspects of the social system impact health care providers and vice versa as none of them work in isolation. 
While different institutions are separate from one another, it is important to recognize that the boundaries between them are porous and interdependent. Settings and study population The data for this article comes from research conducted during two related projects in Kilombero district, Morogoro region. The main project was called Connect, a research intervention study designed to test the impact of using a paid cadre of CHWs that provided integrated maternal, newborn and child health services .
CHWs also provided family planning services such as distributing condoms, refilling oral contraceptives, and providing education and referrals (for other family planning methods) in households. The second sub-project was known as Connect Family Planning, which began to operate in 2013. It aimed to contextualize the findings of the first project, which had shown that CHWs had a non-significant effect on contraceptive utilization after two years of their introduction . During the implementation of both studies from 2010 to 2013, CHW retention was 98% (Kante, Almamy. Personal communication. Aug. 21, 2019). The study population comes from rural and peri-urban areas. The residents of Kilombero District are mostly engaged in subsistence farming, cultivating crops like rice, maize, and cassava. Kilombero District is a religiously and ethnically heterogeneous area, populated by both Muslims and Christians . Common ethnic groups include farming tribes such as the Wapogoro, Ndamba, Kaguru, Wangoni and Wahehe, as well as recent migrants such as the Sukuma, who are both farmers and pastoralists. At times, conflict would arise between farmers and pastoralists. During the study period, one such conflict led to the death of a police officer, the siege of a police station, and helicopters and other national reinforcements being brought to the area . Data collection and analysis The data informing this article comes from two sources: (i) qualitative data in the form of in-depth interviews (IDIs) and focus group discussions (FGDs) and (ii) ethnographic data in the form of observations and participation. The qualitative research was part of a larger study, which was registered in the International Standard Randomized Trial register under number ISRCTN96819844. The IDIs and FGDs were collected in two phases, in March 2012 and July 2013, from eight villages out of 50 intervention villages in Rufiji, Ulanga, and Kilombero Districts.
Qualitative data collected during March 2012 came to be known as Qualitative Appraisal System 1 (QSA 1), while data collected during July 2013 came to be known as Qualitative Appraisal System 2 (QSA 2). Fewer interviews were conducted during QSA 2 because the aim was to track any changes in the specific themes rather than to produce an exhaustive list of themes. The criteria for selecting the villages for QSA 1 and 2 factored in the size of the villages, the numbers of WAJAs deployed, and information about health coverage and access. The aim of the data collection was to gain impressions from the different stakeholders and perspectives involved in both the provision and the receipt of health services. Researchers reached saturation in the targeted themes used to triangulate the quantitative data: improvement of maternal and child health (MCH), referrals, medical supplies, and increased MCH knowledge. From the village government, participants included Village Executive Officers (VEOs), village chairmen, traditional birth attendants, Village Health Workers, village supervisors, hamlet leaders and WAJAs. From the government and health care providers, the respondents included health care workers (doctors, clinicians and nurses) and members of the Community Health Management Team (CHMT). A total of 88 IDIs and 24 FGDs were conducted by native Swahili-speaking interviewers (see Table ). On average, each interview lasted forty-five minutes to one hour and each FGD took between sixty and ninety minutes. The FGDs averaged 12 respondents, including women and men, categorized by age, gender and profession . An additional file has been included that shows more details on the questions administered (see Additional file ). Upon consent, interviews were audio recorded and then translated into English by experienced translators. In both rounds of qualitative data collection, the same interviewers were used for consistency.
Prior to data collection, the interviewers were trained by a senior IHI researcher on research ethics and confidentiality, as well as on how to correctly phrase the questions in the interview and focus group guides. Community and district authorities assisted the researchers in identifying respondents from the Connect sites. The researchers had a list of positions in the community as well as categories of respondents. The local authorities would introduce the researchers to the appropriate individual occupying a requested position, or to a representative from the community for a requested category. A list of the positions and categories of the respondents can be found in Table . The second source of data for this article comes from ethnographic research that involved observation of and participation in WAJAs’ professional, communal and personal activities. Two researchers were involved in the ethnographic study: a medical anthropologist completing his doctoral degree and a research assistant with a university degree. The ethnographic study occurred over a nine-month period from October 2013 to June 2014. It involved four villages in Kilombero District: Katindiuka, Lumemo, Mlabani and Kisawasawa. Researchers accompanied supervisors distributing supplies to the WAJAs, attended training of village health teams, observed mass meetings of WAJAs and district supervisors, reviewed WAJAs’ monthly reports with the supervisors, visited health centers and dispensaries, interviewed health workers at the centers and dispensaries, and interviewed community members about WAJAs’ services. Our research also included an observation period that entailed visiting six WAJAs (four female, two male) three times a week for 12 weeks in their villages. We spent an average of six hours a day observing and participating in WAJA activities, alternating between mornings and evenings across the four villages.
We kept a daily record of our observations in the form of field notes, and we discussed salient findings and topics at the end of each day. The ethnographic data focused on WAJAs’ professional work, including household visits, which entailed case management, educational sessions on maternal and child health and family planning, referrals, patient check-ups, and consulting with supervisors and fellow WAJAs over the phone. The data also covered WAJAs’ “non-professional” money-making roles: primarily farming, but also brick making, boda boda (motorcycle) taxi driving, and pressing palms for oil. The remaining data concerned their communal roles: attending meetings, prayers and community gatherings such as funerals, weddings and baptisms. Prior to beginning the ethnographic research, we analyzed the QSA 1 and 2 qualitative data to discover broad themes and topics related to the WAJAs’ reception in their own villages. Three team members were involved in reading the IDIs and FGDs, including one member who was part of the data collection in QSA 1 and 2 and two members who were not part of the data collection team. The initial analysis of the qualitative data followed grounded theory procedures, an inductive research method that strives to generate concepts from the data, privileging description over abstract categories and engaging in constant comparison between data sets . We pursued open coding to determine the frequency, similarities, relationships and contexts that shaped WAJAs’ reception in the study area. Based on the general findings, we adopted categories such as lack of medicines, delays of salaries, kinship relations and income generation activities, all of which emerged from the reading of the IDIs and FGDs. This was achieved through writing memos, sharing notes and conducting discussions among the three researchers. We also recorded salient quotes and information from interviewees based on the general analysis of the IDI and FGD data.
We used these first impressions, themes and topics to guide, but not to determine, the scope of our ethnographic research. We repeated the same process for our ethnographic data. We pursued open coding to analyze our field notes, constantly reflecting on themes generated earlier from the qualitative data and noting the frequency and similarity of themes, levels of detail, and relationships among the IDIs, FGDs and ethnographic field notes. We wrote memos, shared notes and met to discuss and draw out the connections, similarities and differences across our data. At the axial coding stage, we engaged in an overall interpretation of the data based on the patterns that emerged. This procedure verified earlier observations, such as the delay of medical supplies, but also revealed a richer picture of the implications of such findings for WAJAs’ professional role. Axial codes were combined to provide an explanatory framework of how WAJAs’ roles and identities blur and interact, with implications for their work. At the final, integrative stage, we generated a working theory to explain how previous and new roles interacted, positively and negatively, in WAJAs’ work. Strengths and limitations of the study A strength of this study was the use of an ethnographic approach to understand the community’s reception of the introduction of a specific type of community health worker: the researchers spent nine months observing the WAJAs and were able to triangulate and contextualize the findings of QSA 1 and 2. To minimize observer effects, researchers accompanied WAJAs on their pre-planned work schedules instead of having activities created for the researchers. A limitation of the current study is that the data used were collected five years ago. Several of the issues that undermined CHWs’ professional and personal identities, such as delays in salaries or medical supplies, may have changed or been resolved. Another weakness is the study’s sample size, particularly for the observational data.
The researchers observed only six WAJAs from four villages in Kilombero district, which means the issues and factors observed during the study were place- and time-specific. The direct quotes included below come from participants in QSA 1, QSA 2 and the ethnographic study period. We identified four main themes: Economic Activities and Kinship Ties; Merging Kinship Ties with Professional Relationships; Stakeholder Ownership; Access to Medicine as Professional Identity; and WAJAs’ Local Knowledge and Efforts to Provide Services. Pre-existing roles: economic activities and kinship ties In the ethnographic component of the study, five out of the six WAJAs whom we observed were engaged in income generation activities outside their official eight-hour-per-day, five-days-per-week duties as WAJAs. These included farming, brick-laying, extraction of palm oil and driving motorcycle taxis. These activities were done during both work and non-work hours. Female WAJAs mostly preferred small businesses such as palm oil extraction and farming rather than driving motorcycle taxis or brick making, which were socially prescribed as male occupations.
WAJAs’ “non-professional” work increased during the farming season (December to mid-May) and during the frequent delays in their salaries and medical supplies. The WAJAs informed us that these activities were undertaken to supplement their professional salaries and to meet family and societal demands, such as the support of siblings and the financing of weddings and funerals. Some of these activities, such as contractual farming (known as the Mraba system), were undertaken to supplement their salaries: At that time, I had not received my salary yet, honestly I was working as a WAJA for two days, and the rest I had to look for other work; like someone will offer me a piece of land (Mraba) to cultivate, and in the end he will pay me, so that when I go back home I can have a meal, and so that they [WAJA’s family] can see that I am working. It is not like permanent work. (Field Notes, Male WAJA from Kilombero District) The WAJA interviewed stated that he exchanged his labor under the Mraba system to supplement his income and fulfill familial obligations. The male WAJA noted that Mraba contracts were common in the planting season, which begins in December and extends through late March in Kilombero District. Working as contract farmers formed part of WAJAs’ personal and communal identity and connected them to the socio-economic sector of farming. This in turn enhanced their professional identity as health care workers. During rainy seasons, farmers moved onto their farms, where they spent extended amounts of time preparing the rice fields and planting. As farmers, WAJAs knew about the state of the roads during heavy rains and the difficulty of reaching communities that needed care in areas prone to flooding. During rainy seasons some roads were impassable even by car. WAJAs also acquired knowledge of the kinds of diseases and health challenges that different seasons brought.
Farmers lived in the flooded fields, and their drinking water at this time came from five-foot pits contaminated by the floods, which exacerbated intestinal diseases like cholera and diarrhea. Health care workers also knew about these issues; however, because their services were provided from a fixed location, their knowledge of community norms and disease trends was not as in-depth or timely as that of WAJAs. WAJAs used this knowledge to inform their work and to assist government-initiated campaigns against cholera and diarrhea. Merging of kinship ties with professional relationships The bonding between WAJAs, project staff and the community was evident in the use of kinship terms to refer to each other. Community members, village government officers, health facility employees and Connect project staff members referred to WAJAs using kinship terms like vijana (youth), WAJA wetu (our WAJA), mwanangu (my offspring) and watoto (children). In the following excerpt, a health facility supervisor expresses his positive working relations with his local WAJA: In general, we don’t have a problem with our WAJA [WAJA wangu], other [WAJA] can call us asking us if the client they have referred has reached the clinic. And we give him feedback on the situation (IDI, Male Health facility supervisor from Rufiji District) The health facility supervisor referred to the WAJA using a kinship term, WAJA wangu (my WAJA), an identity label commonly used in personal and community interactions. In turn, WAJAs also called some project supervisors Mama WAJA, which means WAJA’s mother. Community members, project staff and WAJAs themselves used identity labels that blurred the distinction between personal, community, and professional domains. WAJAs’ status as youth and kin members made villagers, especially youth, more comfortable and willing to ask them questions, attend their educational sessions and ask for contraceptives such as condoms.
In our household and neighborhood visits with WAJAs, youngsters would regularly stop the WAJAs to ask for condoms and to ask questions about sexual health. Access to medicine as professional credibility Interviews with WAJAs and observations of their interactions with clients revealed that when WAJAs had access to medicine they were seen as professionals, but when they did not, they were treated with suspicion. In addition to providing health education and arranging referrals for complicated cerebral malaria cases and high-risk pregnancies, WAJAs also engaged in case management of mild malaria, pneumonia, and diarrhea. WAJAs provided free medication to mothers and children under five, visiting their clients door-to-door. At times, WAJAs were able to dispense medicines even when local dispensaries and health clinics lacked the necessary supplies. Community members often referred to WAJAs as “street doctors” and female WAJAs as “street nurses” because of their access to medicine. Sometimes, the label “street doctor” was used regardless of the WAJA’s gender. In the following passage, a male WAJA from Rufiji explains why they are seen as doctor-like: Because we can’t test them [villagers] we have to tell them to go to the pharmacy or health center to get tested. When they return with the result we give them ALU [a common anti-malarial medication] from the prescription they got from where they got tested, and most people do return to us and we give them treatment. When someone gets better s/he won’t say that WAJAs are not reliable, they believe that we are their doctors and that is the situation (IDI, Male WAJA from Rufiji District). WAJAs were not allowed to dispense medicines without a proper diagnosis and therefore, for example, asked their clients to get tested for malaria before receiving anti-malarials.
Although WAJAs were supposed to be able to provide malaria testing through rapid diagnostic tests (RDTs), their medical supplies were delayed during our observation period, and most WAJAs had no RDTs for diagnosis. In such cases, WAJAs referred their clients to the health facility for malaria tests, and when the patients returned, the WAJAs gave them medicine. Similarly, we found that the health facility was also sometimes out of stock of essential medicines. The facility staff would then refer their clients to private pharmacies. Instead of buying these medicines at an expensive private pharmacy, clients preferred to return to the WAJA, where they could get the medicines for free. See Fig. for more details on reverse referral. Whether WAJAs diagnosed or recommended referrals to the health center before providing medication, community members perceived WAJAs’ position as doctor-like: Secondly, the difficulty for me is that people have begun seeing me like a doctor. I am talking about what is happening in my village. As a WAJA, I know that I must give medicines in the required dosage in order to avert the pressing diseases that affect children under five. But because [villagers] know you are a health care worker and you have some of the medicines, they think they don’t need to go anywhere else and every disease that one has will be treated by the WAJA. (FGD, Male WAJA from Kilombero District) Here the WAJA reflected on villagers’ attempts to elevate him to a doctor because of his access to medicine. However, the quote also shows the apprehension WAJAs had about such a label. None of the WAJAs we observed endorsed being called a doctor; they often highlighted that community members conflated access to medicine and adherence to dosage with a doctor-like identity. The ability to provide medicine also underpinned the WAJAs’ professional role in delivering preventative services.
When WAJAs were unable to provide medicine during their educational, preventative house visits, their visits were often unwelcome. Due to challenges in the supply chain, WAJAs often had to conduct their daily household visits to provide information without their curative services. WAJAs’ reception during house visits was very different in these times of curative shortages. A village supervisor in Rufiji observed: A certain belief has already been created in the client’s mind that if you are sick, the WAJA has medicine to cure you completely. And if a client visits the WAJA and he doesn’t have medicines, they question your [WAJA’s] purpose for visiting [the patient]. (IDI, Village Supervisor from Rufiji District) Questioning in this context means that clients regarded the WAJA’s visit as a waste of time. Despite still being able to provide helpful information on nutrition and referrals, WAJAs’ professional roles were unwelcome without the curative services. WAJAs were perceived to have failed in their professional role of providing medicine to their clients. Consequently, the WAJA’s role as a health worker was challenged. While the WAJAs’ service mandate was to serve mothers and children under five, everyone in the community required their services.

WAJA’s local knowledge and efforts to provide family planning services

WAJAs’ task of delivering family planning education and services to women and youth raised moral and ethical questions from the community centering on issues of gender, reproduction and marital relations. WAJAs used their knowledge as locals to navigate this contentious terrain and provide personalized services to women who sought these services. Despite more than a decade of family planning education and services nationally, family planning remained a controversial topic in Tanzania, including in Connect intervention areas.
One of the female WAJAs from Mlabani, Kilombero began her response to an interview question by providing insight into the contentious nature of family planning and how it intersects with prevailing gender and patriarchal norms: In providing family planning services, we are always eager to involve the husbands because you can secretly give [pills to the wives] and later, the husbands finds the pills. [The wives to defend themselves] would say that the lady [WAJA] has given me the pills. (IDI, Female WAJA from Kilombero District) This female WAJA from Kilombero observed that it is best to approach family planning education and services by including the husbands. When a husband comes to know that his wife uses family planning or receives family planning education without his consent, she noted, the WAJA often becomes an object of anger and insults. Some husbands interpreted the use of family planning as an indication of their wives’ promiscuity. Some husbands reasoned that their wives could use pills to prevent pregnancy from extra-marital affairs. The controversy surrounding family planning services was not confined to the Kilombero area but extended to other Connect intervention areas such as Rufiji District. One of the female WAJAs interviewed, from Mangwi Village in Rufiji District, reported how she was confronted by an angry husband whose wife did not tell him that she wished to use family planning services: I almost got in trouble but not entirely. There was a certain woman who wanted to start using family planning but she actually never met me … Some people directed her to my place and this news [that she is seeking family planning services] reached her husband. A fight broke out between the husband and the wife in their home. I had to call that woman to ask if there was a fight and she admitted that it happened.
I told her if your husband doesn’t like it [family planning], I will have to come to your home to jointly inform you about the uses of family planning services and how it enables one to stop or resume pregnancy when one decides. (IDI, Female WAJA from Rufiji District) These two cases show that family planning remained a contentious subject, and women who wanted such services had to use them under constrained conditions. Males, and particularly husbands, were the main source of opposition to using family planning. Husbands’ anger was often directed toward their wives and the CHWs who were perceived to have offered these services. Several times, women enrolled in family planning services without their husbands’ knowledge. In these cases, WAJAs approached their female clients as if they were “new” to the services: So if a woman has already secretly started the dose and her husband comes to know, he may think it’s her first time. There are women who start to use family planning [contraceptive pills] secretly by going to pharmacies and to the health centers without their husband’s knowledge—they go there early in the morning and return home un-noticed. So when you go to the likes of these clients you have to approach the whole situation as if you are persuading a new person to start family planning services. In fact, we are not allowed to give someone the pills; our work is to assist women who are already on the pills by continuing to support them. (IDI, Female WAJA from Kilombero) As locals aware of the gender roles and patriarchal relations in the village, WAJAs treated women who sought their services as new clients, meaning they respected their desire to use the services and minimized questions probing into the women’s history of using contraceptives, to make them feel more comfortable and less embarrassed.
In our study, we observed that WAJAs employed several strategies to enable new and existing users of family planning to continue utilizing the services. WAJAs met their clients away from their homes to give them advice or refill their pills. In certain cases, WAJAs reported that they recommended that their clients seek family planning services from bi-monthly outreach medical camps staffed by local clinicians and nurses. These outreach programs were conducted away from clients’ homes, and there clients could speak with skilled clinicians and nurses about various family planning methods in relative privacy. A female WAJA from Lumemo, who had planned and participated in several outreach programs in her village, shared her experiences of medical camps and how they try to accommodate both new and existing clients: Husbands are usually comfortable allowing their wives to attend the outreach because it is a norm that mothers with children attend clinics for checking their infants’ weights, getting vaccinations and receiving safe motherhood education. When clients come to these outreach areas, in addition to the clinic duties, we counsel them about family planning services, and we connect them to nurses and clinicians who give them more detailed information and prescribe pills for them. (IDI, Female WAJA from Kilombero District) During the outreach, the WAJAs interviewed explained, they provided information to their new clients about family planning, but also met with other clients with whom they had prior arrangements. Again, although these negotiations underpin the personalization of care, this is achieved at the cost of having to accommodate local patriarchal, gender and social norms. WAJAs were also subject to these gender and patriarchal norms, with consequences for their professional work. There were differences in how community members received WAJAs and their services.
In our observation, we found that unmarried girls and married young women were more comfortable speaking to male WAJAs about contraceptive use and refills than to female WAJAs. The general community stereotype was that women were gossipy, and female WAJAs’ professional role did not shield them from being perceived as gossipy. However, young males, both married and unmarried, were relatively comfortable requesting condoms from both male and female WAJAs. Male WAJAs reported difficulty in speaking about family planning methods to older married women because they were judged to be young and to be entering a domain of “women and sexual relations.” However, pregnant women and their relatives, in Kisawasawa for example, felt more comfortable being escorted to the health centers at night by male WAJAs than by female WAJAs. Villagers felt male WAJAs could better protect them at night than female WAJAs.

In the ethnographic component of the study, five of the six WAJAs whom we observed were engaged in income-generating activities outside their official duties of eight hours per day, five days per week, as WAJAs. These included farming, brick-laying, extraction of palm oil and driving motorcycle taxis. These activities were done during both work and non-work hours. Female WAJAs mostly preferred small businesses such as palm oil extraction and farming rather than driving motorcycle taxis or brick-making, which were socially prescribed as male occupations. WAJAs’ “non-professional” work increased during the farming season (December to mid-May) and during the frequent delays in their salaries and medical supplies. The WAJAs informed us that these activities were undertaken to supplement their professional salaries and to meet family and societal demands, such as the support of siblings and the financing of weddings and funerals.
Some of these activities, such as contractual farming (known as the Mraba system), were undertaken to supplement their salaries: At that time, I had not received my salary yet, honestly I was working as a WAJA for two days, and the rest I had to look for other work; like someone will offer me a piece of land (Mraba) to cultivate, and in the end he will pay me, so that when I go back home I can have a meal, and so that they [WAJA’s family] can see that I am working. It is not like permanent work. (Field Notes, Male WAJA from Kilombero District) The WAJA interviewed stated that he exchanged his labor through the Mraba system to supplement his income and fulfill familial obligations. He noted that Mraba contracts were common in the planting season, which begins in December and extends through late March in Kilombero District. Working as contract farmers shaped WAJAs’ personal and communal identity and connected them to the socio-economic sector of farming. This in turn enhanced their professional identity as health care workers. During rainy seasons, farmers moved into their farms, where they spent extended amounts of time preparing the rice fields and planting. As farmers, WAJAs knew about the state of the roads during heavy rains and the difficulty of reaching communities that needed care in areas prone to flooding. During rainy seasons some roads were impassable even by car. WAJAs also acquired knowledge of the kinds of diseases and health challenges that different seasons brought. Farmers lived in the flooded fields, and their drinking water at this time came from five-foot pits contaminated by the floods, which exacerbated intestinal diseases like cholera and diarrhea. Health care workers also knew about these issues; however, because their services were provided from a fixed location, their knowledge of community norms and disease trends was not as in-depth and timely as that of the WAJAs.
WAJAs used this knowledge to inform their own work, but also to assist government-initiated campaigns against cholera and diarrhea. The bonding between WAJAs, project staff and the community was evident in the use of kinship terms to refer to each other. Community members, village government officers, health facility employees and Connect project staff members referred to WAJAs using kinship terms like vijana (youth), WAJA wetu (our WAJA), (m)wanangu (my offspring) and watoto (children). In the following excerpt, a health facility supervisor expresses his positive working relations with his local WAJAs: In general, we don’t have a problem with our WAJA [WAJA wangu], other [WAJA] can call us asking us if the client they have referred has reached the clinic. And we give him feedback on the situation (IDI, Male Health facility supervisor from Rufiji District) The health facility supervisor referred to WAJAs using a kinship term, WAJA wangu, an identity label commonly used in personal and community interactions. In turn, WAJAs also called some project supervisors Mama WAJA, which means WAJA’s mother. Community members, project staff and WAJAs themselves used identity labels that blurred the distinction between the personal, community, and professional domains. WAJAs’ status as youth and kin members made villagers, especially youth, more comfortable and willing to ask them questions, attend their educational sessions and ask for contraceptives such as condoms. In our household and neighborhood visits with WAJAs, youngsters would regularly stop the WAJAs to ask for condoms and ask questions about sexual health.
Using a mixture of IDIs, FGDs and ethnographic research, this article has examined the multiple, parallel roles of embedded CHWs (known as WAJAs). This examination demonstrates the complex ways in which the “professional” and “personal” identities of WAJAs interact, with implications both for their work and for their lives.

WAJA’s participation in income generation enhances their professional identity

In our study, we found that WAJAs had been engaged in a wide range of income-generating activities prior to becoming paid health workers and that many continued these activities. Their continuation of these activities was related to the temporary nature of the project and the need to meet both family and community obligations. Involvement in income-generating activities, primarily as farmers but also as brick makers and motorcycle drivers, enabled WAJAs to utilize local social networks, reinforced their community relations and afforded them knowledge of the lives of the communities they served. This contrasted with health care workers, who were recruited from different parts of Tanzania, spoke different languages, operated from a fixed point and served for short periods. WAJAs’ local identities thus enhanced their professional identity by enabling them to form trusting and mutual relationships and to better understand the challenges of providing care to diverse populations within the village, such as farmers, pastoralists, youth and women. Our study also showed that personal and communal identities enhanced professional roles.
By participating in the local economy, for example through farming, the WAJAs remained embedded in social, cultural and economic relationships that gave them a local identity and legitimacy. In this respect, our study supports the observation by Schneider et al. that CHWs’ position as “insiders” allowed them access to the community and enabled them to converse in ways the villagers understood, which increased both their efficacy in delivering services and their feeling of being professional health care providers. To be able to deliver their services, CHWs saw their local knowledge and networks as important means of carrying out their professional tasks. In our study, WAJAs’ involvement in socio-economic activities such as farming kept them attuned to weather and infrastructural conditions, as well as to the emergence of new health challenges, such as outbreaks of cholera and waterborne diseases. This knowledge enabled CHWs to offer services that factored in local realities and conditions, which in turn made them sought after by various health agencies, including national health campaigns.

Kinship ties: integrating project into local relations

Our findings on the importance of kinship relations, and their implications for professional roles, are supported by other studies. In Kok et al.’s multi-country study, villagers trusted and felt comfortable speaking with health extension workers (HEWs) because they were “connected” to each other. Program designers paid particular attention to the community ties of applicants when recruiting HEWs. Other studies, in Nigeria and South Africa, also confirmed the centrality of kinship and an “insider” identity in the positive and trusting relations built between CHWs and their communities. In our study, WAJAs were referred to through kinship terms, which signaled their membership in the local community and their integral part in existing social relationships.
The use of kinship terms was so pervasive that project staff, such as the project supervisors, also came to be referred to through such terms. A supervisor, if male, came to be known as Baba WAJA, meaning WAJA’s father, and if female, as Mama WAJA, meaning WAJA’s mother. The use of kinship terms shows how personal and communal roles can interact with professional roles in creating trust and a sense of ownership of the project.

Access to medicine and professional credibility

Several studies have shown how both access and lack of access to medicine affect CHWs’ roles and identities. A study by Ajayi et al. conducted in southwest Nigeria, concerning community perceptions of CHWs providing home-based malaria management, found that communities positively evaluated CHWs’ provision of antimalarial drugs, even stating that the CHWs were more effective and more accommodating than health care workers. In a multi-country study by Kok et al. detailing the benefits of CHWs as health intermediaries, the authors showed that in their case studies from Mozambique and Malawi, community members valued CHWs’ curative roles, which increased CHWs’ respect and recognition within the community. Observation and interviews with WAJAs and their clients showed that access to medicine enhanced WAJAs’ social and professional status and gave them credibility as health providers. Our study thus supports other scholars’ findings that communities valued CHWs’ curative roles. Other studies have also shown the effects of irregular access to medicine and supplies on CHWs’ status. In the same study by Kok et al., the authors note that irregular medical supplies affected CHWs’ identity. When CHWs could not provide curative services, because of either being overwhelmed by demand or lacking supplies, they felt stressed. At the same time, the community blamed and criticized them for not fulfilling their duties, which led some CHWs in Malawi to leave their homes.
Our conclusion accords with the findings of these studies: a lack or irregular supply of medicine does indeed have an adverse effect on WAJAs’ personal, communal and professional status. When WAJAs had access to medicine, they were judged to be professionals; when they lacked medicine, they were described as “mere youth” or as “of no use.”

WAJAs as knowledge agents: offering family planning in constrained settings

WAJAs marshaled their socio-cultural and technical knowledge to provide culturally informed health services. Other studies have suggested that CHWs are knowledge agents, but have not adequately accounted for how “personal” identities and roles constitute a form of knowledge that CHWs draw upon. WAJAs met their family planning clients in the streets or other convenient places, used respectful language, responded to questions and did follow-ups. They were able to do all this, it is clear, due to their community “embeddedness.” But this embeddedness also accentuated several identities, some that aligned with the WAJAs’ project-assigned roles and others that conflicted with them. WAJAs approached family planning services by negotiating carefully between their personal and professional roles and larger socio-cultural forces. WAJAs were trained to offer family planning services to husband and wife together and to emphasize publicly that using these services must be a joint decision. In a way, they were trying to change the discourse on family planning from being a women’s issue to being a mutual decision involving couples. Such efforts have been widely documented in the literature on family planning in Sub-Saharan Africa and Asia. However, this official practice had to be balanced against the right of women to choose whether and when to have children. When these two directives clashed, WAJAs aided women desiring family planning in starting and continuing contraceptive services, even if the community and their husbands disapproved.
Sensitive programs such as family planning tended to highlight WAJAs' status as youth and their gender. Clients who opposed family planning often accused WAJAs of being either too young or, if they were male, of the wrong gender to speak about women's reproductive issues. Most contention over family planning revolved around pills and came from husbands. As evidenced in the findings section above, WAJAs frequently circumvented social and cultural structures such as gender, age and social status, as well as opposition both from the community and from husbands, in their effort to provide family planning services to women. This circumvention was necessary because the WAJAs' position as "youths" in the village undermined their ability to actively confront the gender and patriarchal norms that inhibit women's ability to obtain family planning services. In these cases, WAJAs occupied conflicting roles: they were professional health workers with skills to meet clients' needs, but also insiders with skills to avoid tensions between health workers and the community, arguably minimizing intra-household troubles that interfere with service initiation. WAJAs employed strategies such as referring their clients to health retreats, meeting them away from their homes and suggesting family planning services that are less "visible" to women's partners, such as injectable contraceptives. While such actions are not easily maneuvered at health facilities, these were some of the avenues for individual family planning services that WAJAs were often uniquely well-positioned to provide. By using their socio-cultural knowledge to evade gender and social norms and provide family planning services to their clients, WAJAs both perpetuated existing norms and indirectly challenged them. As noted in other studies, family planning services and their promotion remain a contentious subject in many sub-Saharan and Asian countries.
By seeking the consent of husbands, parents, and elders, WAJAs perpetuated the patriarchal order of social and spousal relations. In these moments, we can say that WAJAs were operating in their personal roles as youths who are socialized to respect elders and adhere to prevailing social norms. Yet their work might also be interpreted as an effort to subvert the traditional order, as in many cases they helped new and existing clients to use family planning services without their husband's or society's approval. Medical anthropologists have shown that women's medical and health decisions are not a straightforward affair, and often entail moral and ethical ambiguities. WAJAs' efforts to provide services to their clients expose these moral and ethical tensions.

Gender and community norms and their implications for professional roles

Several studies have pointed out how gender, age, and socio-economic status have implications for CHWs' professional roles. Studies by Bhutta et al. and Haq et al. pointed to the importance of considering the amount of work CHWs do in relation to the effects of existing gender-based socio-cultural roles, and how these affected CHWs' ability to provide care. Focusing on women's mobility as a frame of analysis, Mumtaz et al. demonstrated how women's cultural and social identities affected the number of visits female CHWs made, the places they could visit, and the quality of services. Factors like prior relationships between CHWs and the community, gender, and community social norms emerged as important interfaces through which patients communicated with health extension workers (HEWs) and through which the workers were evaluated. Because the HEWs' work aligned with the provision of maternal and child health services such as clean and safe deliveries, post-natal care and family planning, the authors noted that patients preferred to speak to female HEWs because they associated maternal and child matters with issues related to women.
The present study supplements these previous findings by showing how personal and professional identities interact, with implications for professional and domestic life. In our study, we also found that married and unmarried patients, depending on their age, preferred to discuss and access maternal and child health and family planning services from female CHWs, because it was culturally appropriate and related to what they viewed as women's issues. However, younger women preferred to access and discuss contraceptive use with male WAJAs because they felt less judged as promiscuous. Young men aged 18–35 were open to both female and male WAJAs regarding family planning education and services such as condom refills. We also found that WAJAs had been engaged in a wide range of income-generating activities prior to becoming paid health workers and that many continued these activities. Their continuation of these activities was related to the temporary nature of the project and the need to meet both family and community obligations. Involvement in income activities, primarily as farmers but also as brick makers and motorcycle drivers, enabled WAJAs to utilize local social networks, reinforced their community relations and afforded them knowledge of the lives of the communities they served. This contrasted with health care workers, who were recruited from different parts of Tanzania, spoke different languages, operated from a fixed point and served for short periods. WAJAs' local identities thus enhanced their professional identity by enabling them to form trusting and mutual relationships and to better understand the challenges of providing care to diverse populations within the village, such as farmers, pastoralists, youth and women. Our study also showed that personal and communal identities enhanced professional roles.
By participating in the local economy, for example through farming, WAJAs remained embedded in social, cultural and economic relationships that gave them a local identity and legitimacy. In this respect, our study supports the observation by Schneider et al. that CHWs' position as "insiders" allowed them to access the community and converse in ways the villagers understood, and therefore increased both their efficacy in delivering services and their sense of being professional health care providers. To deliver their services, CHWs saw their knowledge and local networks as important means of doing their professional tasks. In our study, WAJAs' involvement in socio-economic activities such as farming kept them attuned to weather and infrastructural conditions as well as the emergence of new health challenges, such as outbreaks of cholera and waterborne diseases. This knowledge enabled them to offer services that factored in local realities and conditions, which in turn made them sought after by various health agencies, including national health campaigns. Our findings on the importance of kinship relations, and their implications for professional roles, are supported by other studies. In Kok et al.'s multi-country study, villagers trusted and felt comfortable speaking with health extension workers (HEWs) because they were "connected" to each other. Program designers paid particular attention to the community ties of applicants when recruiting HEWs. Other studies in Nigeria and South Africa also confirmed the centrality of kinship and an "insider" identity to the positive and trusting relations built between CHWs and their communities. In our study, WAJAs were referred to through kinship terms, which signaled their membership in the local community and their integral part in existing social relationships.
CHWs' involvement in productive activities such as farming kept them in tune with their community's social rhythms, economic patterns, and common health risks. One immediate implication of this finding is the importance of designing a realistic work schedule for CHWs that better accommodates their seasonal income-generating activities and family obligations. Because CHWs' kinship and communal bonds preceded each intervention, continued during it, and were renegotiated with the beginning of a new intervention, project staff should be aware that these identities and roles will both hinder and facilitate certain goals of a project. If a project targets supplies or medications to a particular group only, such as mothers and children under five, project leaders should be aware that CHWs will be pressured by the community to provide medications to other members of their social networks. In other words, CHWs are members of the village and have a sense of obligation to share and serve despite the specific nature of their project mandate. At present, the Tanzanian government is planning to scale up the CHW program based on the Connect model. It is necessary that the new national CHW model retain some aspects of previous community engagement.
"Embeddedness" within the socio-economic life of their villages transforms CHWs into powerful agents for preventative health care, but may also encourage deference to local power structures and norms. Our study found that village residency requirements, village selection and village oversight successfully created invested stakeholders in the project. This sense of community ownership of CHWs and their work was evident in the kinship names community members often used to refer to them. CHWs also need supportive systems, such as a reliable supply chain, supervision and mentorship, and a well-designed training package, to be able to deliver services and maintain their status in the health system (e.g. dispensaries) and, above all, in the communities they serve. All these resources are important in enabling them to gain positive identities, gain community acceptance, and provide the whole continuum of care from prevention to curative services and referrals. We suggest that program designers and the government should continue to pay CHWs for their services. If permanent arrangements are difficult to maintain, programs should be kept flexible and employers should be aware that CHWs may seek other sources of income to sustain their lives and fulfil their family obligations. Our research focused on CHWs as a category. A further study is needed to explore how the gender, marital and socio-economic status of CHWs affect their personal and professional roles, with implications for their work. Additional file 1. IDI and FGD Questionnaires. The file contains the questionnaires used for the in-depth interviews (IDIs) and focus group discussions (FGDs).
A comparative evaluation of deep learning approaches for ophthalmology (PMC 11410932)

The global rise of Artificial Intelligence (AI) shows no signs of slowing down. As AI technologies continue to advance, their potential to revolutionize various industries, including healthcare, is becoming increasingly apparent. Ophthalmology, in particular, stands to benefit significantly from AI advancements, which promise to enhance diagnostic accuracy, personalize treatment plans, and streamline the management of eye diseases. While not all AI is created equal, the industry is becoming increasingly consistent and organized. Key developments contributing to this include the establishment of reporting guidelines, specialist guidance on the safe and effective adoption of AI, and government-led best practice initiatives. These efforts are crucial in ensuring that AI is integrated into ophthalmology in a manner that maximizes its benefits while minimizing potential risks. Additionally, the rise of drag-and-drop AI platforms has made AI more accessible to a broader audience, including users with varying levels of coding expertise. Transparency in AI development is also advancing, driven by the availability of open-source scripts and a growing number of publicly accessible datasets featuring various imaging modalities. These resources are essential for training machine learning (ML) algorithms, particularly in ophthalmology, where they aid in developing tools for detecting and classifying eye pathologies. A significant contribution to this transparency is Papers with Code, which provides a collection of AI models and their implementations, along with benchmarks for different tasks.

Ophthalmic imaging modalities and AI applications

Ophthalmic imaging plays a critical role in diagnosing and monitoring eye diseases.
Fundus photography and Optical Coherence Tomography (OCT) are two key modalities widely used in clinical practice. Fundus photography provides high-resolution images of the retina, aiding in the identification of various pathologies. OCT, on the other hand, creates high-resolution cross-sectional images of the retina, offering detailed visualization of retinal layers. These imaging techniques generate extensive datasets, which, when paired with corresponding diagnostic ground truths, serve as the foundation for training deep learning algorithms.

Machine learning tasks in ophthalmology

Several machine learning tasks can be performed on ophthalmic datasets, including classification, grading, heatmap generation, and quantization. For instance, deep learning algorithms trained on image data from fundus cameras and OCT scanners can predict pathologies such as glaucoma with high accuracy. Classification, in particular, has been well documented: Lily Peng et al. demonstrated high specificity and sensitivity in detecting Diabetic Retinopathy (DR) using the CNN InceptionV3 model on DR-graded Eyepacs images. Their AI model outperformed ophthalmologists in classifying the same dataset, underscoring the potential of AI to enhance diagnostic accuracy. Similarly, Cecilia Lee et al. achieved high ROC values for Age-Related Macular Degeneration (AMD) using 2D OCT image slices. Other studies, such as those by Barros et al., Singh et al., and Jiang et al., have successfully used CNNs to classify glaucoma and other pathologies such as optic disc edema, papillitis, and AMD, with performance comparable to that of board-certified ophthalmologists. While classification of 2D OCT slices is achievable with traditional classifiers, training on entire 3D OCT scans is expected to yield better results. This can be accomplished using 3D CNNs or transformers, which can process 3D volumes.
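As a concrete sketch of this approach, a minimal 3D CNN volume classifier might look as follows (PyTorch; all layer sizes and the input resolution are illustrative assumptions, not the configuration of any cited study):

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Illustrative 3D CNN for whole-OCT-volume classification.

    Input shape: (batch, 1, depth, height, width). Sizes are assumptions
    chosen to keep the sketch small, not a recommended architecture.
    """
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # 3D convolution
            nn.ReLU(),
            nn.MaxPool3d(2),                              # halve D, H, W
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # -> (B, 16, 1, 1, 1)
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A downsampled volume (here 32 B-scans of 64x64) keeps GPU memory modest.
logits = Tiny3DCNN()(torch.randn(1, 1, 32, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```

Even at such reduced resolutions, activation memory grows rapidly with volume size, which is why downsizing and small batches are usually unavoidable in practice.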
However, a challenge with training 3D CNNs is the need to downsize the resolution to fit GPU memory, which often forces a small batch size. Heatmaps are another important tool in ophthalmic AI applications. During inference, heatmaps can reveal the regions of an image that the classifier deemed important when making its decision. For example, in glaucoma classification, a heatmap might highlight the optic disc region. Heatmaps are part of Explainable Artificial Intelligence (XAI), which aims to make machine learning decisions more interpretable. Quantization is also of interest to ophthalmic researchers, as it reduces the size of a trained classifier model, enabling deployment on small devices such as smartphones. Quantization involves converting model parameters from floating-point to integer values, which not only shrinks the model but also speeds up computation by using integer operations. Ophthalmic image data is typically labeled by class, such as DR vs. normal or glaucoma vs. normal. However, some datasets, like the publicly available OCT2017 dataset, contain images labeled with multiple pathologies, including Diabetic Macular Edema (DME), Choroidal Neovascularization (CNV), and Drusen. Additionally, some datasets, like Eyepacs, offer images labeled by grade rather than by class. Training models on graded data can be more advantageous, as demonstrated by Yijin Huang, who achieved superior results using Mean Squared Error (MSE) loss compared to cross-entropy loss when training on Eyepacs data.

AI architectures: CNNs and transformers

The architectures discussed in this paper fall into two primary categories: Convolutional Neural Networks (CNNs) and transformers. CNNs are specifically designed for image processing and are commonly used in tasks requiring classification into multiple categories, such as ‘normal’, ‘glaucoma’, or ‘DR’.
CNNs leverage convolutions to identify image features using spatially aware filters, learning structured representations that enable accurate categorization or grading. Notably, traditional CNNs have been employed in 2D OCT slice classification, but the potential for improved results lies in training on entire 3D OCT volumes. This can be achieved with 3D CNNs, which use 3D convolutions, although this approach requires careful management of GPU memory due to the higher data demands. Transformers, originally developed for Natural Language Processing (NLP) tasks, have recently been adapted for image-related tasks with considerable success. Unlike CNNs, transformers can capture long-term dependencies within an image, identifying non-local correlations between objects that CNNs tend to overlook; this capability has helped transformers outperform CNNs in image classification tasks, as evidenced by their superior performance in the ImageNet rankings. Moreover, transformers have shown versatility across various data formats, such as images, video, sound, and text, making them particularly promising for multimodal applications in ophthalmology. Recent advancements have also led to the development of hybrid models that combine CNNs and transformers, aiming to leverage the strengths of both architectures. For instance, a hybrid approach can use CNNs for local feature extraction and transformers for capturing global context, leading to enhanced performance in tasks like retinal disease classification. Given these advancements, this study investigates the performance of both CNN and transformer architectures in ophthalmic applications, utilizing public and private datasets that represent various ophthalmic modalities.
The performance of these architectures is evaluated not only by accuracy but also by factors such as training time, quantization efficiency, and the ability to generate interpretable heatmaps. The ultimate goal of this paper is to identify the most effective AI models for diagnosing and managing eye diseases, thereby advancing the field of ophthalmology.
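The quantization step described above can be sketched with PyTorch's post-training dynamic quantization API. This is an illustrative toolchain choice; the paper does not name the one it used:

```python
# Post-training dynamic quantization: float32 Linear weights become int8,
# shrinking the serialized model and enabling integer matrix multiplies.
import io
import torch
import torch.nn as nn

def serialized_bytes(m: nn.Module) -> int:
    """Size of the saved state_dict, as a simple model-size proxy."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

# Stand-in classifier head; a real study would quantize its trained model.
float_model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))
int8_model = torch.ao.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

print(serialized_bytes(float_model), "->", serialized_bytes(int8_model))
```

The quantized model still produces float outputs at the interface, so it can be dropped into an existing inference pipeline on a smartphone-class device.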
Given these advancements, this study investigates the performance of both CNN and transformer architectures in ophthalmic applications, utilizing public and private datasets that represent various ophthalmic modalities. The performance of these architectures is evaluated not only by accuracy but also by factors such as training time, quantization efficiency, and the ability to generate interpretable heatmaps. The ultimate goal of this paper is to identify the most effective AI models for diagnosing and managing eye diseases, thereby advancing the field of ophthalmology. The proposed method is divided into three broad categories: Fundus image, 2D OCT image, and 3D OCT volume-based classifiers. Within the Fundus image-based classification, further evaluation and analysis are performed based on the type of classifier required for the targeted pathology, such as a multiclass classifier for detecting DR (e.g., DR vs Healthy) and a grading classifier for exclusively grading the pathology (e.g., classification into different grades of DR ranging from healthy to severe DR). Classification of fundus images In the classification of fundus retinal images, two distinct types of classifiers hold prominence: the multiclass classifiers and the grading classifiers. These two classifiers are extensively expounded upon in the subsequent subsections, offering valuable insights into their diverse applications and significance in the field. Multiclass classifiers Datasets In order to determine the best-performing classification architectures for fundus images, we utilized four publicly available datasets containing pathologies such as diabetic retinopathy (DR) and glaucoma. Specifically, we employed the Eyepacs dataset , which includes retinal images categorized into four different grades of diabetic retinopathy. Grade 0 comprises 25810 images, grade 1 comprises 2443 images, grade 2 comprises 5292 images, and grade 3 comprises 873 images.
Grades 1–3 were merged into a single DR class for the classification task, while grade 0 (healthy) images were placed in the normal class. The Messidor dataset contains 1200 images related to diabetic retinopathy, consisting of 788 normal and 172 DR images. This dataset also includes an exclusive test set comprising 182 normal and 54 DR images. The Messidor-2 dataset is another collection of DR-related images, featuring 1368 normal images and 380 DR images. The ACRIMA dataset is a glaucoma dataset comprising 309 normal and 396 glaucomatous images. Architectures Selecting a promising architecture for ophthalmic tasks requires weighing several factors. These factors include the accuracy of the architecture when training on fundus datasets, the speed at which the model can be trained, the model’s ability to be trained on small datasets, the size of the model to fit on small ophthalmic imaging/triaging devices, and the ability to create heatmaps from the model. Different factors may hold varying levels of importance for specific ophthalmic tasks, but overall accuracy is generally considered the most important parameter. The architectures we examined were the best-performing ones described in Papers with Code . In cases where Papers with Code did not show fundus image datasets for a specific eye pathology, we selected architectures based on the best performers in the Papers with Code ImageNet leaderboard . The top performers included ViT, EfficientNet, VOLO, Beit, and RegNet. Additionally, we included InceptionV3 from Lily Peng’s 2016 paper for historical reasons, even though it did not rank as a top performer. All the architectures were pre-trained with the ImageNet dataset . Image augmentation techniques, such as modifying contrast, aspect ratio, flipping, and brightness, were also utilized during training to reduce overfitting. The architectures used in the study included mainly CNNs, transformers, or a combination of both.
Pure transformers such as the ViT classifier, a scaled vision transformer , , were utilized to extend NLP for images using patching to reduce the sequence size. This approach is known for achieving top performance on ImageNet. The GitHub vit-keras codebase was employed with an image size of 384x384. Additionally, the transformer VOLO was tested. VOLO uses fine-level features or tokens that are often overlooked in self-attention methods. The Keras CV Attention GitHub repository (KCAC) was used for the implementation, with the VOLOd5 variant and an image size of 224x224. BEIT , another transformer architecture, tokenizes the image and applies masks to patches. The study utilized the KCAC variant BeitBasePatch16 with an image size of 224. DaViT is a simple visual transformer that captures global context by leveraging self-attention mechanisms with both spatial tokens and channel tokens. In this study, the KCAC variant DaViTS was used, with an image size of 224x224. Unlike the previously described architectures, CotNet is a hybrid of transformer and CNN that utilizes convolutions and employs attention on 2D feature maps. It utilized the KCAC variant CotNetSE152D with an image size of 320. Another CNN/transformer hybrid, CoAtNet , utilizes depthwise convolution and self-attention. The KCAC variant CoAtNet0 was used with an image size of 224x224. ResNeSt Split-Attention Networks is another CNN/transformer that combines CNN with attention mechanisms. It features split attention, which enables cross-feature interactions. The variant used is ResNest269 with KCAC , and the image size is 416x416. MLP-Mixer has an unconventional architecture as it is neither a CNN nor a transformer. It is a multilayer perceptron with no CNNs, transformers, or attention mechanisms. The implementation uses the MLPMixerL16 variant with KCAC and an image size of 224x224. The rest of the architectures mentioned are CNNs. 
RegNet , which belongs to the ResNet family, is a CNN with shortcuts to prevent vanishing gradients. Unlike other models, RegNet features a regulator module for improved complementary features. The variant used is RegNetZ with KCAC , and the image size is 256x256. Additionally, Normalizer-Free ResNets is a CNN family that eliminates batch normalization, which can be computationally expensive. It uses the NFNetF2 variant with KCAC and an image size of 352x352. InceptionV3 is a classic CNN that utilizes factorized convolutions, wherein multiple filters are applied simultaneously to a channel, and label smoothing, which compensates for errors in ground truths. The implementation was carried out using TensorFlow , with an image size of 299x299. EfficientNet is a CNN that determines the width, resolution, and depth of a CNN through ‘compound scaling’, adjusting the width and depth for a given resolution rather than doing so arbitrarily. It employs KCAC and uses the variant EfficientNetV2S with an image size of 384x384. These architectures were trained on four public datasets, partitioning the data into 80% for training and 20% for validation. Each dataset was balanced, ensuring an equal number of images for each class. The accuracy scores were calculated by inferring from the 20% validation set, image by image, in order to obtain the final accuracy score. Since multiclass classification is not a regression problem, only accuracy was calculated. The training was stopped when the validation accuracy failed to increase after more than ten epochs. The architectures were trained on NVIDIA T4 (2560 cores) GPUs, except for InceptionV3, which was trained using a GeForce GTX 960 (1024 cores). The performance of different CNN architectures on various datasets with corresponding accuracies is reflected in Table and Fig. . Training time The training time for each architecture was determined based on the processing time per image trained on the Eyepacs dataset with an NVIDIA T4 GPU.
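The balanced-split and early-stopping protocol just described can be sketched in plain Python. This is a minimal illustration under our own naming; `balance_classes`, `train_val_split`, and `should_stop` are hypothetical helpers, not functions from the study's codebase:

```python
import random

def balance_classes(images_by_class, seed=0):
    """Undersample each class to the size of the smallest one,
    so every class contributes an equal number of images."""
    rng = random.Random(seed)
    n = min(len(v) for v in images_by_class.values())
    return {c: rng.sample(v, n) for c, v in images_by_class.items()}

def train_val_split(items, train_frac=0.8, seed=0):
    """Shuffle, then split into 80% training and 20% validation."""
    rng = random.Random(seed)
    items = items[:]
    rng.shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

def should_stop(val_accuracies, patience=10):
    """Stop when validation accuracy has not improved for more
    than `patience` epochs (ten in the protocol above)."""
    if len(val_accuracies) <= patience:
        return False
    best = max(val_accuracies)
    last_best = max(i for i, a in enumerate(val_accuracies) if a == best)
    return len(val_accuracies) - 1 - last_best > patience
```

In practice the same effect is usually obtained through a framework callback (e.g. a Keras `EarlyStopping` with `patience=10` monitoring validation accuracy), but the logic reduces to the comparison above.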
The number of images processed per second was estimated as the batch size multiplied by the number of steps per epoch, divided by the time to train each epoch. Architectures that require high memory must use a smaller batch size, resulting in fewer processed images per second. Similarly, a deep architecture with a wide field of view will also process fewer images per second due to the higher number of architectural parameters. The most efficient training times were observed with the EfficientNet, RegNet, CoatNet, and InceptionV3 architectures. Quantization Quantization is a crucial process for deploying trained models onto small devices. It involves converting the floating point values in the model to 8-bit integers (uint8), resulting in a smaller model size and faster computation when used on Advanced RISC Machine (ARM) chip-based devices like many smartphones. Quantization requires determining the range of the input data so that the integers can be scaled correctly, which may involve clipping. The EfficientNet, Beit, CotNet, and ResNeSt families were effectively quantized using the Keras TFLiteConverter. For example, an EfficientNet model trained on Eyepacs was reduced in size from 245 MB to 80 MB, with the accuracy decreasing from 85% to 82% after quantization. Additionally, a quantized model for RegNet was trialed on an iPhone XR using a TensorFlow repository for real-time classification using the iPhone’s camera and could perform multiple classifications per second. The RegNet uint8 quantized model took 250 milliseconds per image inference, while a floating point version of the same model took 400 milliseconds on the iPhone. The InceptionV3 models were quantized by first converting the model variables to constants using TensorFlow’s “convert variables to constants” function. The quantized model could then be deployed on Android and iPhone devices using the TensorFlow Android repository and iOS repository .
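The scale-and-clip arithmetic behind uint8 quantization can be illustrated without any framework. This is a conceptual sketch of affine quantization, not the internals of the Keras TFLiteConverter:

```python
import numpy as np

def quantize_uint8(x, lo, hi):
    """Affine-quantize float values in [lo, hi] onto 0..255;
    values outside the range are clipped."""
    scale = (hi - lo) / 255.0
    q = np.clip(np.round((x - lo) / scale), 0, 255).astype(np.uint8)
    return q, scale

def dequantize(q, scale, lo):
    """Recover approximate float values from the uint8 codes."""
    return q.astype(np.float32) * scale + lo

weights = np.array([-0.5, 0.0, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_uint8(weights, lo=-1.0, hi=1.0)
restored = dequantize(q, scale, lo=-1.0)
# restored is close to the original at a quarter of the storage
# (one uint8 byte per value instead of four float32 bytes)
```

The fourfold storage reduction per weight is the mechanism behind the model-size drop reported above, and integer arithmetic is what accelerates inference on ARM chips.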
Heatmaps Heatmaps provide insights into classification decisions by highlighting important regions of an image that were influential in a specific diagnostic inference. They commonly employ the Grad-CAM technique , which utilizes the gradients of the target class, such as glaucoma, in a classification network and feeds them into the last CNN layer. This process generates a coarse localization map of the critical regions used to make the prediction, followed by backward propagation for reconstruction into a DeconvNet, which produces the final heatmap image. An EfficientNet model, trained with the ACRIMA glaucoma dataset, was utilized to create heatmaps using the Grad-CAM technique from the Keras repository . The resulting heatmap image (Fig. ) highlights the critical parts of the optic disc that the classifier identified as significant during the classification decision. Heatmaps were also created for InceptionV3-trained models using a Grad-CAM repository . Figure shows a heatmap generated using guided backpropagation . Due to the disappointing performance of transformers with small datasets, no attempt was made to generate heatmaps for transformers like ViT, even though they can be created using the same method . Discussion The performance metrics for various CNNs for the multiclass classification problem are summarized in Table . EfficientNet demonstrates strong performance across different datasets with a remarkable training speed of 31 images per second. On the other hand, Transformers exhibit lower accuracy and slow training time. Considering factors such as accuracy, training speed, the ability to quantize, and the generation of heatmaps, EfficientNet emerges as the top performer overall. Transformers tend to over-fit on smaller datasets , displaying poor performance even on the largest ophthalmic dataset, Eyepacs (containing over 25810 images). This suggests that more than tens of thousands of images are required to mitigate over-fitting. 
Transformers were pre-trained on the ImageNet dataset of over 14 million images , which may explain why they perform well on the ImageNet leaderboard. Additionally, architectures that excel with larger datasets also tend to perform well with smaller datasets. For instance, EfficientNet performs well with Eyepacs and Messidor, while the MLP family yields inferior results on the same datasets. Despite the small size of the datasets used (by consensus, fewer than 4000 images ), it was observed that the accuracy was quite high, even though overfitting would typically be expected to reduce accuracy. Overfitting in small datasets can be mitigated by techniques such as image augmentation, which involves rotating, flipping, and cropping images to make it more challenging for the classifier to memorize the training data, encouraging it to generalize instead. Image augmentation is commonly integrated into deep learning frameworks such as Keras (the Keras ImageDataGenerator augments with parameters including shear_range and zoom_range). The use of pre-trained models is also beneficial for smaller datasets, as these models have been trained on millions of ImageNet images, and the learned filters can be reused for new images, including fundus images. Without pre-trained models, datasets would need to be much larger to train new filters. Additionally, a dropout layer can help reduce overfitting and is commonly included in CNN models such as EfficientNet. Dropout randomly disables units during training, reducing the likelihood of data memorization and encouraging generalization. Using smaller models also helps mitigate overfitting, as fewer parameters make it harder for the model to memorize the training data . This may explain why EfficientNet, despite its small size (20.33 million parameters), performed well compared to VOLO, which has 296 million parameters. The poor performance of VOLO and DavitS may be attributed to overfitting, resulting in memorization of the training data rather than generalization.
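A stripped-down version of such augmentation can be written directly in numpy. The random flip plus brightness/contrast jitter below loosely mimics what an augmentation pipeline such as ImageDataGenerator applies; the function and parameter ranges are our own illustration, not the study's settings:

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and jitter brightness/contrast of a float
    image in [0, 1], producing a slightly different training sample
    each call so the classifier cannot memorize exact pixels."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]            # horizontal flip (width axis)
    brightness = rng.uniform(-0.1, 0.1)  # additive brightness shift
    contrast = rng.uniform(0.9, 1.1)     # multiplicative contrast around 0.5
    img = (img - 0.5) * contrast + 0.5 + brightness
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(42)
fundus = rng.random((224, 224, 3))       # stand-in for a fundus image
batch = np.stack([augment(fundus, rng) for _ in range(8)])
```

Each of the eight batch entries is a distinct variant of the same source image, which is the mechanism by which augmentation effectively enlarges a small dataset.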
The high number of parameters also explains why transformers took longer to train compared to CNNs, with ViT and VOLO having the slowest training times. The underperformance of transformers with smaller datasets has been observed in studies such as those by Chen et al. and Zhu et al. . Zhu et al. argue that the VIT transformer’s lower performance on small datasets may be due to a “lack of inductive bias of locality with lower layers, where Vit cannot learn the local relations with a small amount of data.” This poor performance may not only be attributed to the larger parameter size but also to the transformer architecture itself. Further research into improving transformers’ ability to train on smaller datasets would be beneficial. Grading classifiers When labeling datasets based on grades rather than classes, like the four grades of DR in the Eyepacs dataset , a standard multiclass classifier is not suitable. The multiclass classifier requires modification to produce a single-grade output ranging from 0 to 1. For example, in the case of InceptionV3, the number of labels was reduced to one using the TensorFlow tf.reshape function in the last layer. The loss function was changed to Mean Squared Error (MSE) loss, replacing softmax since the grading does not use cross-entropy. Similarly, in the case of InceptionV3, the slim Euclidian loss (MSE) replaced the slim softmax loss in the model. This modification was also made for EfficientNet and RegNet models using the sigmoid activation. Zhang et al. took a different approach by using a deep graph correlation network (DGCN) consisting of multiple CNNs that are correlated through a graph. They claimed that the performance was close to that of specialists’ results. However, they did not compare the performance of a DGCN to that of a single modified CNN, so it is unclear whether it is superior to a single CNN. 
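The grading modification described above, replacing a softmax multiclass head with a single output trained against MSE, reduces to the following computation. The sketch shows only the loss on a sigmoid-squashed output, with illustrative logits and the Eyepacs grades 0–3 rescaled to [0, 1]; it is a schematic, not the actual model code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse_loss(logits, grades):
    """Single-output grading head: a sigmoid squashes the raw
    output to [0, 1]; the loss is mean squared error against
    grades rescaled to the same range (0, 1/3, 2/3, 1)."""
    preds = sigmoid(logits)
    return float(np.mean((preds - grades) ** 2))

grades = np.array([0.0, 1 / 3, 2 / 3, 1.0])   # rescaled DR grades
logits = np.array([-4.0, -0.7, 0.7, 4.0])     # hypothetical last-layer outputs
loss = mse_loss(logits, grades)
```

Because the target is a continuous grade rather than a one-hot class, cross-entropy no longer applies and MSE penalizes predictions in proportion to how far they fall from the true severity.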
Datasets The datasets used to test these architectures included Eyepacs, which contained four grades of DR scaled between 0 and 1. For Messidor and Messidor-2 , a grade of 0 was assigned to healthy images and a grade of 1 was assigned to DR images. Architectures There are no examples of grading architectures on Papers with Code . Therefore, architectures were selected based on the performance of previously examined ones. The chosen architectures include EfficientNet, RegNet, and InceptionV3. Each architecture was adjusted to use a single output with mean squared error (MSE) loss, instead of using softmax. We used 80% of the data for training and 20% for validation. Because the grading output is a continuous value rather than a class probability, accuracy must be calculated differently. A prediction is considered correct if the predicted value and the ground truth are both less than 0.5, or if both are over 0.5. Since grading is a regression problem, AUC, precision, and recall were also calculated, with 0.5 as the midpoint. Each modified CNN architecture utilized for grading underwent training on different datasets, and their performances are detailed in Table . The grading accuracies are also depicted in Fig. as a heat grid. Heatmaps, quantization, training time Heatmaps and quantization were performed similarly to multiclass classifiers due to the use of the same architectures as with classification (except for the last layer). It was assumed that the training time was the same because of the identical architectures being used. Discussion The performance metrics for various grading classifiers are presented in Table . Among the three datasets tested, RegNet demonstrated the best performance for grading applications. RegNet, InceptionV3, and EfficientNet displayed similar capabilities for generating heatmaps and quantization, likely due to their use of the same architectures as in the multiclass classification problem.
The AUC (Area Under the Curve) is a helpful metric when dealing with unbalanced data and was used to evaluate the performance of the grading classifiers. The AUC values showed a strong correlation with accuracy, with RegNet achieving the highest AUC values, similar to its accuracy performance. RegNet also demonstrated the highest precision, averaging 91% across the three datasets, while EfficientNet averaged 85% and InceptionV3 averaged 76%. In terms of recall, EfficientNet averaged 78%, compared to 76% for RegNet and 53% for InceptionV3. The fact that precision is higher than recall for the grading classifiers indicates that when the models predict a condition they are usually correct, but they miss a larger share of true cases as false negatives. However, the threshold of 0.5 could be adjusted to balance recall and precision. RegNet’s superior performance in regression compared to other models was also noted by Maddury et al. . The study indicated that, across different regression problems, RegNet outperformed EfficientNet. However, the paper did not provide any explanations as to why RegNet may have outperformed other models in regression. RegNet incorporates a regulatory module that controls the flow of information between layers, preventing early block information from being forgotten in later blocks, whereas EfficientNet optimally scales depth and width. It is possible that RegNet’s regulatory module is better suited for regression tasks. The comparison showed that there are few disadvantages to using grading over multiclass classification, especially since the accuracy is similar (85% for EfficientNet multiclass Eyepacs versus 88% for grading). Additionally, grading had similar training times and the same ability to generate heatmaps and be quantized as multiclass classification.
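The 0.5-midpoint rule and the precision/recall trade-off can be made concrete with a small helper. This is our own illustrative implementation with toy values, not the study's evaluation code:

```python
import numpy as np

def grading_metrics(preds, truths, threshold=0.5):
    """Binarize continuous grades at a threshold (positive = grade
    above threshold) and compute accuracy, precision, and recall."""
    p = np.asarray(preds) > threshold
    t = np.asarray(truths) > threshold
    tp = np.sum(p & t)    # predicted positive, truly positive
    fp = np.sum(p & ~t)   # predicted positive, truly negative
    fn = np.sum(~p & t)   # predicted negative, truly positive
    accuracy = np.mean(p == t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return float(accuracy), float(precision), float(recall)

preds = [0.1, 0.4, 0.8, 0.9, 0.3]   # hypothetical model grades
truths = [0.0, 1.0, 1.0, 1.0, 0.0]  # ground-truth labels
acc, prec, rec = grading_metrics(preds, truths)
```

Here the 0.4 prediction falls just below 0.5 and becomes a false negative, so precision exceeds recall; lowering the threshold recovers that case at the risk of admitting false positives, which is exactly the adjustment discussed above.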
Moreover, grading offers the advantage of providing a probability of a condition instead of a discrete multiclass prediction, which may be more useful in a clinical situation. Classification OCT 2D images In the context of OCT 2D slices (as opposed to fundus camera images), the most effective architectures were studied using two publicly available OCT image datasets, OCT2017 and OCTID. Tsuji et al. also demonstrated the effectiveness of training CNNs on OCT data (as opposed to fundus images) for the pathologies CNV, DME, and drusen, achieving close to 100% accuracy. Datasets The dataset named OCT2017 contains 2D cross sections of sagittal slices of the retina. It includes four image classes: Choroidal Neovascularization (CNV) (37205 images), Diabetic Macular Edema (DME) (11348 images), drusen (8616 images), and healthy (26315 images). The CNV images display the neovascular membrane and associated subretinal fluid. Meanwhile, the DME images depict retinal thickening associated with intraretinal fluid, along with multiple drusen present in early AMD. The OCTID dataset consists of slices displaying various eye pathologies, including: normal (200 images), macular holes (100 images), macular degeneration, and retinopathy (100 images). The EIA2020 dataset includes 200 normal and 200 glaucomatous optic disc cube OCT volumes from 200 participants, with 100 of them diagnosed with glaucoma and the other 100 being normal controls. All 2D images from the 200 participants were categorized into glaucoma and non-glaucoma multiclass groups. This dataset comprises 93760 Enface slices and 40400 longitudinal cross-sectional slices of the optic nerve head. Architectures The leaderboard on Papers with Code for the dataset OCT2017 is publicly available. However, because the accuracy for each architecture is close to 100%, it is challenging to determine which architectures performed the best. Therefore, the ones that showed the best performance for fundus images were chosen.
These architectures include EfficientNet, RegNet, ResNeSt, CotNet, and InceptionV3. Similar to the training of fundus images, 80% of the dataset’s image data is used for training and 20% for validation. Each architecture is trained using the relevant dataset, and accuracy is calculated in the same manner as for fundus images. The results are presented in Table and Fig. . Quantization, heatmaps and training time Because these architectures are the same as for the fundus images, each architecture can be quantized, and heatmaps generated, with the same technique as for the fundus images. Training times were also calculated in the same way. Discussion According to the leaderboard on Papers with Code , determining the best-performing architecture for 2D OCT images is challenging because accuracies are close to 100% for OCT2017 and OCTID. For EIA2020, the Enface slices showed the best performance with CotNet, while accuracy across the remaining architectures was mixed. Midena suggests that the high accuracy observed when training on OCT datasets is because OCT images contain more information on eye structures compared to fundus images. The article describes how OCT images include eye structures that are not visible in fundus images. It might be suspected that the high accuracy is due to overfitting, but the accuracy is calculated using a separate validation set. The high accuracy of OCT images was observed for all models tested, indicating that OCT images are better for predicting pathology compared to fundus images. Classification of 3D OCT volumes When training with individual 2D OCT images using a 2D classifier, we achieved almost 100% accuracy with the datasets we used. However, training on an entire 3D OCT volume is expected to yield even better results. To accomplish this, we experimented with 3D CNNs and transformers. Dataset As with our previous 2D classifier, we utilized the EIA2020 dataset. However, this time, we employed the entire 3D volume of the Optic Disc Cube for classification.
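Training on full volumes means convolving in three dimensions. To illustrate what a Conv3D layer computes over voxels, and why memory demands grow with the extra dimension, here is a naive single-filter 3D convolution in numpy; real Conv3D layers are batched, multi-channel, and GPU-optimized, and the tiny volume below stands in for an actual OCT cube:

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Naive valid 3D convolution of one volume with one filter,
    sliding a cube of weights over every voxel position."""
    d, h, w = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

vol = np.random.default_rng(0).random((8, 16, 16))   # tiny stand-in volume
out = conv3d_single(vol, np.ones((3, 3, 3)) / 27.0)  # 3x3x3 mean filter
# output shrinks by (kernel size - 1) in every dimension
```

Compared with a 2D convolution over one slice, each output voxel here touches 27 inputs instead of 9, and activations must be stored for the whole volume, which is why 3D CNNs are kept less deep and wide than their 2D counterparts.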
Figure shows a sample 3D OCT volume. Architectures For the targeted application, no Papers with Code leaderboards for 3D classifier architectures were available. Hence, potential classifiers were tested from GitHub, which included three 3D CNNs and two transformers. The 3D CNN architecture for volumetric data was used, with voxels instead of 2D points, as specified by Ahmed et al. . The 3D CNNs used are less deep and wide compared to 2D CNNs due to memory constraints from the extra dimension of volumetric data. All architectures were trained using the EIA-2020 dataset, with the Optic Disc Cube in the Enface orientation; these OCT volumes were 128x128x64 for each patient. The CNN-3D-images-Tensorflow repository is similar to a 2D CNN but includes two Conv3D layers instead of multiple Conv2D layers. It comprises two Conv3D layers (32, 64) with ReLU activations, followed by fully connected layers and dropout. The 3D CNN in the Keras io repository is deeper than the previous architecture, with four Conv3D layers (64, 64, 128, 256) and dropout. The 3D-CNN-Keras repository has just one layer by default but was modified to have five layers, and it includes batch normalization. The Perceiver transformer was tested using the Keras perceiver code , which is designed to train on images with three channels (RGB). However, the three channels were replaced with a stack of 64 grayscale 128x128 OCT images, forming a volume of 128x128x64. The Perceiver is a transformer, as opposed to a 3D CNN, and is capable of processing data in various formats, including audio, video, 3D volumes and images. It utilizes attention with key and query sizes that are unrelated to the input size, allowing it to conserve memory as compared to traditional transformers for the same input size. The second transformer trialed was the ViT transformer , implemented using vit-keras . It was originally designed for 2D classification.
However, similar to the perceiver, it was modified to have a depth of 64 layers and trained with three channels to produce a volume of 128x128x64. As with 2D classification, 80% of the data is used for training and 20% for validation. Table displays the accuracy and classification time of each classifier on the trialed dataset. Figure depicts the heatgrid of different CNNs for classification. Heatmaps Similar to 2D classifiers, heatmaps can be generated when inference is performed on sample OCT volumes using Github code from Mehanna , which was modified to work in 3D on 3D-CNN-Keras . Similar to 2D heatmaps, the technique uses GradCAM. In Fig. , a sample glaucoma OCT volume from the EIA-2020 dataset, highlighting the area around the optic disc, is shown. Heatmaps were only generated for 3D-CNN-Keras. These steps can be applied to the other 3D CNN architectures as well. Training time Training time was calculated in the same way as for 2D images. It was estimated from the processing time per volume, with the batch number multiplied by the number of steps per epoch and then divided by the epoch time. Discussion It’s important to note that the CNN-3D architecture showed the highest accuracy, even outperforming keras io, ViT, and the perceiver models. When tested on the MosMed dataset , the CNN-3D architecture achieved an accuracy of 88%, while keras io scored 68%, ViT scored 48%, and the perceiver scored 55%. Despite achieving the highest accuracy, the CNN-3D architecture also had the slowest training speed, whereas the 3D-CNN-Keras model was the fastest. We found that the CNN-3D model performs better than the same slices trained in 2D. We organized the glaucoma slices from each patient of the EIA-2020 data into one group and the normal slices into another group. This resulted in two groups, each containing over 40000 images (49001, 44761). The two groups were trained using an InceptionV3 classifier. 
The CNN-3D model was 93% accurate, while the InceptionV3 model was 78% accurate. This demonstrates the advantage of training on an entire volume rather than individual slices. Quantizing 3D classifiers is impractical because performing inference on extensive 3D volumetric data, such as OCT scans, on a smartphone is not feasible due to hardware constraints. As a result, we did not attempt to quantize 3D classifiers, although it can be done in the same way as 2D classifiers. Also, due to the limitation of having only a single dataset (EIA-2020), we were unable to compare the performance of different 3D CNN architectures with datasets of different sizes. In the classification of fundus retinal images, two distinct types of classifiers hold prominence: the multiclass classifiers and the grading classifiers. These two classifiers are extensively expounded upon in the subsequent subsections, offering valuable insights into their diverse applications and significance in the field. Multiclass classifiers Datasets In order to determine the best-performing classification architectures for fundus images, we utilized four publicly available datasets containing pathologies such as diabetic retinopathy (DR) and glaucoma. Specifically, we employed the Eyepacs dataset , which includes retinal images categorized into four different grades of diabetic retinopathy. Grade 0 comprises 25810 images, grade 1 comprises 2443 images, grade 2 comprises 5292 images, and grade 3 comprises 873 images. The three grades were merged into a single DR class for the classification task, while healthy images were placed in the normal class. The Messidor dataset contains 1200 images related to diabetic retinopathy, consisting of 788 normal and 172 DR images. This dataset also includes an exclusive test set comprising 182 normal and 54 DR images. The Messidor-2 dataset is another collection of DR-related images, featuring 1368 normal images and 380 DR images. 
The ACRIMA dataset is a glaucoma dataset comprising 309 normal and 396 glaucomatous images. Architectures A promising architecture for ophthalmic tasks needs to consider several factors. These factors include the accuracy of the architecture when training on fundus datasets, the speed at which the model can be trained, the model’s ability to be trained on small datasets, the size of the model to fit on small ophthalmic imaging/triaging devices, and the ability to create heatmaps from the model. Different factors may hold varying levels of importance for specific ophthalmic tasks, but overall accuracy is generally considered the most important parameter. The architectures we examined were the best-performing ones described in Papers with Code . In cases where Papers with Code did not show fundus image datasets for a specific eye pathology, we selected architectures based on the best performers in the Papers with Code ImageNet leaderboard . The top performers included ViT, EfficientNet, VOLO, Beit, and RegNet. Additionally, we included InceptionV3 from Lily Peng’s 2016 paper for historical reasons, even though it did not rank as a top performer. All the architectures were pre-trained with the ImageNet dataset . Image augmentation techniques, such as modifying contrast, aspect ratio, flipping, and brightness, were also utilized during training to reduce overfitting. The architectures used in the study included mainly transformers, CNN, or a combination of both. Pure transformers such as the ViT classifier, a scaled vision transformer , , were utilized to extend NLP for images using patching to reduce the sequence size. This approach is known for achieving top performance on ImageNet. The GitHub vit-keras codebase was employed with an image size of 384x384. Additionally, the transformer VOLO was tested. VOLO uses fine-level features or tokens that are often overlooked in self-attention methods. 
The Keras CV Attention GitHub repository (KCAC) was used for the implementation, with the VOLOd5 variant and an image size of 224x224. BEIT , another transformer architecture, tokenizes the image and applies masks to patches. The study utilized the KCAC variant BeitBasePatch16 with an image size of 224. DaViT is a simple visual transformer that captures global context by leveraging self-attention mechanisms with both spatial tokens and channel tokens. In this study, the KCAC variant DaViTS was used, with an image size of 224x224. Unlike the previously described architectures, CotNet is a hybrid of transformer and CNN that utilizes convolutions and employs attention on 2D feature maps. It utilized the KCAC variant CotNetSE152D with an image size of 320. Another CNN/transformer hybrid, CoAtNet , utilizes depthwise convolution and self-attention. The KCAC variant CoAtNet0 was used with an image size of 224x224. ResNeSt Split-Attention Networks is another CNN/transformer that combines CNN with attention mechanisms. It features split attention, which enables cross-feature interactions. The variant used is ResNest269 with KCAC , and the image size is 416x416. MLP-Mixer has an unconventional architecture as it is neither a CNN nor a transformer. It is a multilayer perceptron with no CNNs, transformers, or attention mechanisms. The implementation uses the MLPMixerL16 variant with KCAC and an image size of 224x224. The rest of the architectures mentioned are CNNs. RegNet , which belongs to the ResNeSt family, is a CNN with shortcuts to prevent vanishing gradients. Unlike other models, RegNet features a regulator module for improved complementary features. The variant used is RegNetZ with KCAC , and the image size is 256x256. Additionally, Normalizer-Free ResNeSts is a CNN that eliminates batch normalization, which can be computationally expensive. It uses the NFNetF2 variant with KCAC and an image size of 352x352. 
InceptionV3 is a classic CNN that uses factorized convolutions, which break large convolutions into sequences of smaller, cheaper ones, and label smoothing, which compensates for errors in the ground truths. The implementation was carried out in TensorFlow, with an image size of 299x299. EfficientNet is a CNN that sets its width, resolution, and depth through ‘compound scaling’, adjusting the width and depth for a given resolution rather than doing so arbitrarily. It employs KCAC and uses the variant EfficientNetV2S with an image size of 384x384. These architectures were trained on four public datasets, partitioning the data into 80% for training and 20% for validation. Each dataset was balanced, ensuring an equal number of images for each class. The accuracy scores were calculated by inferring on the 20% validation set, image by image, to obtain the final accuracy score. Since multiclass classification is not a regression problem, only accuracy was calculated. Training was stopped when the validation accuracy failed to increase for more than ten epochs. The architectures were trained on NVIDIA T4 (2560 cores) GPUs, except for InceptionV3, which was trained on a GeForce GTX 960 (1024 cores). The performance of the different architectures on the various datasets, with corresponding accuracies, is reflected in Table and Fig.

Training time

The training time for each architecture was determined from the per-image processing time when training on the Eyepacs dataset with an NVIDIA T4 GPU. The number of batches processed per second (steps per epoch divided by the time to train each epoch), multiplied by the batch size, gives the number of images processed per second. Architectures that require a lot of memory must use a smaller batch size, resulting in fewer images processed per second. Similarly, a deep architecture with a wide field of view will also process fewer images per second due to its higher number of parameters.
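The throughput calculation just described can be written out explicitly. This small sketch uses illustrative numbers, not measurements from the study:

```python
def images_per_second(steps_per_epoch, batch_size, epoch_seconds):
    """Training throughput: (batches per second) x (images per batch)."""
    batches_per_second = steps_per_epoch / epoch_seconds
    return batches_per_second * batch_size

# Illustrative numbers only: 800 steps of batch size 16 in a 400-second epoch.
print(images_per_second(800, 16, 400))  # 32.0
```

A memory-hungry architecture forces a smaller batch size at the same step time, which directly lowers this figure.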
The most efficient training times were observed with the EfficientNet, RegNet, CoAtNet, and InceptionV3 architectures.

Quantization

Quantization is a crucial step for deploying trained models onto small devices. It converts the floating-point values in the model to 8-bit integers (uint8), resulting in a smaller model and faster computation on ARM (Advanced RISC Machines) chip-based devices such as many smartphones. Quantization requires determining the range of the input data so that the integers can be scaled correctly, which may involve clipping. The EfficientNet, Beit, CotNet, and ResNeSt families were effectively quantized using the Keras TFLiteConverter. For example, an EfficientNet model trained on Eyepacs was reduced in size from 245 MB to 80 MB, with accuracy decreasing from 85% to 82% after quantization. Additionally, a quantized RegNet model was trialed on an iPhone XR using a TensorFlow repository for real-time classification with the iPhone’s camera and could perform multiple classifications per second. The RegNet uint8 quantized model took 250 milliseconds per image inference, while a floating-point version of the same model took 400 milliseconds on the iPhone. The InceptionV3 models were quantized by first converting the model variables to constants using TensorFlow’s “convert variables to constants” function. The quantized model could then be deployed on Android and iPhone devices using the TensorFlow Android and iOS repositories.

Heatmaps

Heatmaps provide insight into classification decisions by highlighting the regions of an image that were influential in a specific diagnostic inference. They commonly employ the Grad-CAM technique, which takes the gradients of the target class, such as glaucoma, in a classification network and feeds them into the last CNN layer.
This process generates a coarse localization map of the regions critical to the prediction, which is then reconstructed through backward propagation in a DeconvNet to produce the final heatmap image. An EfficientNet model, trained with the ACRIMA glaucoma dataset, was used to create heatmaps with the Grad-CAM technique from the Keras repository. The resulting heatmap image (Fig.) highlights the parts of the optic disc that the classifier weighted most heavily in its classification decision. Heatmaps were also created for InceptionV3-trained models using a Grad-CAM repository. Figure shows a heatmap generated using guided backpropagation. Because of the disappointing performance of transformers on small datasets, no attempt was made to generate heatmaps for transformers such as ViT, even though they can be created by the same method.

Discussion

The performance metrics of the various architectures on the multiclass classification problem are summarized in Table. EfficientNet demonstrates strong performance across the different datasets with a remarkable training speed of 31 images per second. Transformers, on the other hand, exhibit lower accuracy and slower training. Considering accuracy, training speed, the ability to quantize, and the generation of heatmaps, EfficientNet emerges as the top performer overall. Transformers tend to overfit on smaller datasets, displaying poor performance even on the largest ophthalmic dataset, Eyepacs (over 25,000 images). This suggests that more than tens of thousands of images are required to mitigate overfitting. The leaderboard transformers were trained on the ImageNet dataset of over 14 million images, which may explain why they perform well on the ImageNet leaderboard. Additionally, architectures that excel with larger datasets also tend to perform well with smaller datasets.
For instance, EfficientNet performs well with Eyepacs and Messidor, while the MLP family yields inferior results on the same datasets. Despite the small size of the datasets used (by consensus, fewer than 4000 images), the observed accuracy was quite high, even though overfitting would typically be expected to reduce it. Overfitting on small datasets can be mitigated by techniques such as image augmentation, which rotates, flips, and crops images to make it harder for the classifier to memorize the training data, pushing it to generalize instead. Image augmentation is commonly integrated into deep learning frameworks such as Keras (the Keras ImageDataGenerator augments with parameters including shear_range and zoom_range). The use of pre-trained models is also beneficial for smaller datasets: these models have been trained on millions of ImageNet images, and the learned filters can be reused for new images, including fundus images. Without pre-trained models, datasets would need to be much larger to train new filters. Additionally, a dropout layer can help reduce overfitting and is commonly included in CNN models such as EfficientNet. Dropout randomly drops units during training, reducing the likelihood of data memorization and encouraging generalization. Using smaller models also helps mitigate overfitting, as fewer parameters make it harder for the model to memorize the training data. This may explain why EfficientNet, despite its small size (20.33 million parameters), performed well compared to VOLO, which has 296 million parameters. The poor performance of VOLO and DaViTS may be attributed to overfitting, resulting in memorization of the training data rather than generalization. The high parameter counts also explain why transformers took longer to train than CNNs, with ViT and VOLO having the slowest training times.
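The augmentation transforms mentioned in this discussion (flips, crops, and similar) are simple array operations. The following is a minimal, framework-independent numpy sketch, not the study's pipeline; the function name and crop fraction are illustrative:

```python
import numpy as np

def random_flip_crop(image, rng, frac=0.9):
    """Minimal stand-in for framework augmenters such as Keras's
    ImageDataGenerator: random horizontal flip plus a random crop
    that keeps `frac` of each spatial dimension."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]              # horizontal flip
    h, w, _ = image.shape
    ch, cw = int(h * frac), int(w * frac)      # crop size
    top = int(rng.integers(0, h - ch + 1))     # random crop origin
    left = int(rng.integers(0, w - cw + 1))
    return image[top:top + ch, left:left + cw, :]

rng = np.random.default_rng(0)
sample = np.zeros((100, 120, 3))               # dummy 100x120 RGB image
print(random_flip_crop(sample, rng).shape)     # (90, 108, 3)
```

Because each epoch sees a slightly different version of every image, the classifier cannot simply memorize pixel patterns.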
The underperformance of transformers on smaller datasets has also been observed in studies by Chen et al. and Zhu et al. Zhu et al. argue that the ViT transformer’s lower performance on small datasets may be due to a “lack of inductive bias of locality with lower layers, where Vit cannot learn the local relations with a small amount of data.” This poor performance may therefore be attributable not only to the larger parameter counts but also to the transformer architecture itself. Further research into improving transformers’ ability to train on smaller datasets would be beneficial.

Grading classifiers

When datasets are labeled with grades rather than classes, like the four grades of DR in the Eyepacs dataset, a standard multiclass classifier is not suitable: it must be modified to produce a single-grade output ranging from 0 to 1. For InceptionV3, the number of labels was reduced to one using the TensorFlow tf.reshape function in the last layer, and the slim Euclidean loss (MSE) replaced the slim softmax loss, since grading does not use cross-entropy. The same modification was made for the EfficientNet and RegNet models, using a sigmoid activation with Mean Squared Error (MSE) loss in place of softmax. Zhang et al. took a different approach, using a deep graph correlation network (DGCN) consisting of multiple CNNs correlated through a graph. They claimed performance close to that of specialists. However, they did not compare the DGCN against a single modified CNN, so it is unclear whether it is superior to one.

Datasets

The datasets used to test these architectures included Eyepacs, whose four grades of DR were scaled between 0 and 1. For Messidor and Messidor-2, a grade of 0 was assigned to healthy images and a grade of 1 to DR images.
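The single-output modification described above (one sigmoid unit trained with MSE instead of softmax cross-entropy) amounts to the following computation. This numpy sketch is illustrative, not the paper's code; the function names and toy feature values are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grading_head(features, w, b):
    """Single-output grading head: one sigmoid unit mapping pooled
    features to a grade in (0, 1), replacing the softmax class layer."""
    return sigmoid(features @ w + b)

def mse_loss(pred, target):
    """Mean squared error, used in place of softmax cross-entropy."""
    return float(np.mean((pred - target) ** 2))

# Toy check: hypothetical pooled features for two images.
rng = np.random.default_rng(0)
feats = rng.normal(size=(2, 8))
w, b = rng.normal(size=8), 0.0
grades = grading_head(feats, w, b)
print(grades.shape, mse_loss(grades, np.array([0.0, 1.0])))
```

The sigmoid keeps the prediction inside the 0-to-1 grade range, so the same head serves both binary labels and continuous grades.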
Architectures

There are no examples of grading architectures in Papers with Code, so architectures were selected from the best performers among those examined previously: EfficientNet, RegNet, and InceptionV3. Each architecture was adjusted to use a single output with mean squared error (MSE) loss instead of softmax. We used 80% of the data for training and 20% for validation. Because the output is a continuous grade, accuracy must be calculated differently: a prediction is considered correct if the predicted value and the ground truth are both less than 0.5, or both greater than 0.5. Since grading is a regression problem, AUC, precision, and recall were also calculated, with 0.5 as the midpoint. Each modified CNN architecture used for grading was trained on the different datasets, and their performances are detailed in Table. The grading accuracies are also depicted in Fig. as a heat grid.

Heatmaps, quantization, training time

Heatmaps and quantization were handled in the same way as for the multiclass classifiers, since the architectures are identical except for the last layer. The training time was assumed to be the same for the same reason.

Discussion

The performance metrics for the various grading classifiers are presented in Table. Among the three datasets tested, RegNet demonstrated the best performance for grading applications. RegNet, InceptionV3, and EfficientNet displayed similar capabilities for generating heatmaps and quantization, likely because they use the same architectures as in the multiclass classification problem. The AUC (Area Under the Curve) is a helpful metric when dealing with unbalanced data and was used to evaluate the performance of the grading classifiers. The AUC values showed a strong correlation with accuracy, with RegNet achieving the highest AUC values, mirroring its accuracy performance.
RegNet also demonstrated the highest precision, averaging 91% across the three datasets, while EfficientNet averaged 85% and InceptionV3 averaged 76%. In terms of recall, EfficientNet averaged 78%, compared to 76% for RegNet and 53% for InceptionV3. That precision is higher than recall for the grading classifiers indicates the models are better at predicting when a subject truly has a condition than at predicting when a patient does not. However, the 0.5 threshold could be adjusted to rebalance recall and precision. RegNet’s superior performance in regression compared to other models was also noted by Maddury et al. That study found that, across different regression problems, RegNet outperformed EfficientNet, but it did not explain why RegNet might outperform other models in regression. RegNet incorporates a regulatory module that controls the flow of information between layers, preventing early-block information from being forgotten in later blocks, whereas EfficientNet optimally scales depth and width; it is possible that RegNet’s regulatory module is better suited to regression tasks. The comparison showed few disadvantages to using grading over multiclass classification, especially since the accuracy is similar (85% for EfficientNet multiclass on Eyepacs versus 88% for grading). Grading also had similar training times and the same capacity for heatmap generation and model freezing as multiclass classification. Moreover, grading offers the advantage of providing a probability of a condition instead of a discrete multiclass prediction, which may be more useful in a clinical setting.
Specifically, we employed the Eyepacs dataset , which includes retinal images categorized into four different grades of diabetic retinopathy. Grade 0 comprises 25810 images, grade 1 comprises 2443 images, grade 2 comprises 5292 images, and grade 3 comprises 873 images. The three grades were merged into a single DR class for the classification task, while healthy images were placed in the normal class. The Messidor dataset contains 1200 images related to diabetic retinopathy, consisting of 788 normal and 172 DR images. This dataset also includes an exclusive test set comprising 182 normal and 54 DR images. The Messidor-2 dataset is another collection of DR-related images, featuring 1368 normal images and 380 DR images. The ACRIMA dataset is a glaucoma dataset comprising 309 normal and 396 glaucomatous images. Architectures A promising architecture for ophthalmic tasks needs to consider several factors. These factors include the accuracy of the architecture when training on fundus datasets, the speed at which the model can be trained, the model’s ability to be trained on small datasets, the size of the model to fit on small ophthalmic imaging/triaging devices, and the ability to create heatmaps from the model. Different factors may hold varying levels of importance for specific ophthalmic tasks, but overall accuracy is generally considered the most important parameter. The architectures we examined were the best-performing ones described in Papers with Code . In cases where Papers with Code did not show fundus image datasets for a specific eye pathology, we selected architectures based on the best performers in the Papers with Code ImageNet leaderboard . The top performers included ViT, EfficientNet, VOLO, Beit, and RegNet. Additionally, we included InceptionV3 from Lily Peng’s 2016 paper for historical reasons, even though it did not rank as a top performer. All the architectures were pre-trained with the ImageNet dataset . 
Image augmentation techniques, such as modifying contrast, aspect ratio, flipping, and brightness, were also utilized during training to reduce overfitting. The architectures used in the study included mainly transformers, CNN, or a combination of both. Pure transformers such as the ViT classifier, a scaled vision transformer , , were utilized to extend NLP for images using patching to reduce the sequence size. This approach is known for achieving top performance on ImageNet. The GitHub vit-keras codebase was employed with an image size of 384x384. Additionally, the transformer VOLO was tested. VOLO uses fine-level features or tokens that are often overlooked in self-attention methods. The Keras CV Attention GitHub repository (KCAC) was used for the implementation, with the VOLOd5 variant and an image size of 224x224. BEIT , another transformer architecture, tokenizes the image and applies masks to patches. The study utilized the KCAC variant BeitBasePatch16 with an image size of 224. DaViT is a simple visual transformer that captures global context by leveraging self-attention mechanisms with both spatial tokens and channel tokens. In this study, the KCAC variant DaViTS was used, with an image size of 224x224. Unlike the previously described architectures, CotNet is a hybrid of transformer and CNN that utilizes convolutions and employs attention on 2D feature maps. It utilized the KCAC variant CotNetSE152D with an image size of 320. Another CNN/transformer hybrid, CoAtNet , utilizes depthwise convolution and self-attention. The KCAC variant CoAtNet0 was used with an image size of 224x224. ResNeSt Split-Attention Networks is another CNN/transformer that combines CNN with attention mechanisms. It features split attention, which enables cross-feature interactions. The variant used is ResNest269 with KCAC , and the image size is 416x416. MLP-Mixer has an unconventional architecture as it is neither a CNN nor a transformer. 
It is a multilayer perceptron with no CNNs, transformers, or attention mechanisms. The implementation uses the MLPMixerL16 variant with KCAC and an image size of 224x224. The rest of the architectures mentioned are CNNs. RegNet , which belongs to the ResNeSt family, is a CNN with shortcuts to prevent vanishing gradients. Unlike other models, RegNet features a regulator module for improved complementary features. The variant used is RegNetZ with KCAC , and the image size is 256x256. Additionally, Normalizer-Free ResNeSts is a CNN that eliminates batch normalization, which can be computationally expensive. It uses the NFNetF2 variant with KCAC and an image size of 352x352. InceptionV3 is a classic CNN that utilizes factorized convolutions, wherein multiple filters are applied simultaneously to a channel, and label smoothing, which compensates for errors in ground truths. The implementation was carried out using Tensorflow , with an image size of 299x299. EfficientNet is a CNN that determines the width, resolution, and depth of a CNN through ‘compound scaling’, adjusting the width and depth for a given resolution rather than doing so arbitrarily. It employs KCAC and uses the variant EfficientNetV2S with an image size of 384x384. These architectures were trained on four public datasets, partitioning the data into 80% for training and 20% for validation. Each dataset was balanced, ensuring an equal number of images for each class. The accuracy scores were calculated by inferring from the 20% validation set, image by image, in order to obtain the final accuracy score. Since multiclass classification is not a regression problem, only accuracy was calculated. The training was stopped when the validation accuracy failed to increase after more than ten epochs. The architectures were trained on NVidia T4 (2560 cores) GPUs, except for the InceptionV3, which was trained using a Geforce GTX960 (1024 cores). 
The performance of different CNN architectures on various datasets with corresponding accuracies is reflected in Table and Fig. . Training time The training time for each architecture was determined based on the processing time per image trained on the Eyepacs dataset with an NVidia T4 GPU. The batch number, calculated as the number of steps per epoch divided by the time to train each epoch, provides the number of images processed per second. Architectures that require high memory will have a lower batch number, resulting in fewer processed images per second. Similarly, a deep architecture with a wide field of view will also process fewer images per second due to the higher number of architectural parameters. The most efficient training times were observed with the EfficientNet, RegNet, CoatNet, and InceptionV3 architectures. Quantization Quantization is a crucial process for deploying trained models onto small devices. It involves converting the floating point values in the model to 8-byte integers (uint8), resulting in a smaller model size and faster computation when used on Advanced Reduced Instruction Set Computing Machines (ARM) chips-based devices like many smartphones. Quantization requires determining the range of the input data so that the integers can be scaled correctly, which may involve clipping. EfficientNet, Beit, CotNet, and ResNeSt families were effectively quantized using the Keras TFLiteConverter. For example, an EfficientNet model trained on Eyepacs size was reduced from 245 meg to 80 meg, with the accuracy decreasing from 85% to 82% after quantization. Additionally, a quantized model for RegNet was trialed on an iPhone XR using a TensorFlow repository for real-time classification using the iPhone’s camera and could perform multiple classifications per second. The RegNet uint8 quantized model took 250 milliseconds per image inference, while a floating point version of the same model took 400 milliseconds on the iPhone. 
The InceptionV3 models were quantized by first converting the model variables to constants using TensorFlow’s “convert variables to constants”function. The quantized model could then be deployed on Android and iPhone devices using the TensorFlow Android repository and iOS repository . Heatmaps Heatmaps provide insights into classification decisions by highlighting important regions of an image that were influential in a specific diagnostic inference. They commonly employ the Grad-CAM technique , which utilizes the gradients of the target class, such as glaucoma, in a classification network and feeds them into the last CNN layer. This process generates a coarse localization map of the critical regions used to make the prediction, followed by backward propagation for reconstruction into a DeconvNet, which produces the final heatmap image. An EfficientNet model, trained with the ACRIMA glaucoma dataset, was utilized to create heatmaps using the Grad-CAM technique from the Keras repository . The resulting heatmap image (Fig. ) highlights the critical parts of the optic disc that the classifier identified as significant during the classification decision. Heatmaps were also created for InceptionV3-trained models using a Grad-CAM repository . Figure shows a heatmap generated using guided backpropagation . Due to the disappointing performance of transformers with small datasets, no attempt was made to generate heatmaps for transformers like ViT, even though they can be created using the same method . Discussion The performance metrics for various CNNs for the multiclass classification problem are summarized in Table . EfficientNet demonstrates strong performance across different datasets with a remarkable training speed of 31 images per second. On the other hand, Transformers exhibit lower accuracy and slow training time. 
Considering factors such as accuracy, training speed, the ability to quantize, and the generation of heatmaps, EfficientNet emerges as the top performer overall. Transformers tend to over-fit on smaller datasets , displaying poor performance even on the largest ophthalmic dataset, Eyepacs (containing over 25810 images). This suggests that more than tens of thousands of images are required to mitigate over-fitting. Transformers trained using the ImageNet dataset of over 14 million images , which may explain why they perform well on the ImageNet leaderboard. Additionally, architectures that excel with larger datasets also tend to perform well with smaller datasets. For instance, EfficientNet performs well with Eyepacs and Messidor, while the MLP family yields inferior results on the same datasets. Despite the small size of the datasets used (consensus is less than 4000 images ), it was observed that the accuracy was quite high, even though overfitting would typically be expected to affect accuracy. Overfitting in small datasets can be mitigated by techniques such as image augmentation, which involves rotating, flipping, and cropping images to make it more challenging for the classifier to memorize the training data and instead generalize. Image augmentation is commonly integrated into deep learning frameworks such as Keras (Keras ImageDataGenerator augments with parameters including shearrange, zoomrange). The use of pre-trained models is also beneficial for smaller datasets, as these models have been trained on millions of ImageNet images, and the learned filters can be reused for new images, including fundus images. Without pre-trained models, datasets would need to be much larger to train new filters. Additionally, a dropout layer can help reduce overfitting and is commonly included in CNN models such as EfficientNet. Dropout randomizes weights to varying degrees, reducing the likelihood of data memorization and encouraging generalization. 
Using smaller models also helps mitigate overfitting, as fewer parameters make it harder for the model to learn the training data . This may explain why EfficientNet, despite its small size (20.33 million parameters), performed well compared to VOLO, which has 296 million parameters. The poor performance of VOLO and DavitS may be attributed to overfitting, resulting in memorization of the training data rather than generalization. The high number of parameters also explains why transformers took longer to train compared to CNNs, with ViT and VOLO having the slowest training times. The underperformance of transformers with smaller datasets has been observed in studies such as those by Chen et al. and Zhu et al. . Zhu et al. argue that the VIT transformer’s lower performance on small datasets may be due to a “lack of inductive bias of locality with lower layers, where Vit cannot learn the local relations with a small amount of data.” This poor performance may not only be attributed to the larger parameter size but also to the transformer architecture itself. Further research into improving transformers’ ability to train on smaller datasets would be beneficial. In order to determine the best-performing classification architectures for fundus images, we utilized four publicly available datasets containing pathologies such as diabetic retinopathy (DR) and glaucoma. Specifically, we employed the Eyepacs dataset , which includes retinal images categorized into four different grades of diabetic retinopathy. Grade 0 comprises 25810 images, grade 1 comprises 2443 images, grade 2 comprises 5292 images, and grade 3 comprises 873 images. The three grades were merged into a single DR class for the classification task, while healthy images were placed in the normal class. The Messidor dataset contains 1200 images related to diabetic retinopathy, consisting of 788 normal and 172 DR images. This dataset also includes an exclusive test set comprising 182 normal and 54 DR images. 
The Messidor-2 dataset is another collection of DR-related images, featuring 1368 normal images and 380 DR images. The ACRIMA dataset is a glaucoma dataset comprising 309 normal and 396 glaucomatous images. A promising architecture for ophthalmic tasks needs to consider several factors. These factors include the accuracy of the architecture when training on fundus datasets, the speed at which the model can be trained, the model’s ability to be trained on small datasets, the size of the model to fit on small ophthalmic imaging/triaging devices, and the ability to create heatmaps from the model. Different factors may hold varying levels of importance for specific ophthalmic tasks, but overall accuracy is generally considered the most important parameter. The architectures we examined were the best-performing ones described in Papers with Code . In cases where Papers with Code did not show fundus image datasets for a specific eye pathology, we selected architectures based on the best performers in the Papers with Code ImageNet leaderboard . The top performers included ViT, EfficientNet, VOLO, Beit, and RegNet. Additionally, we included InceptionV3 from Lily Peng’s 2016 paper for historical reasons, even though it did not rank as a top performer. All the architectures were pre-trained with the ImageNet dataset . Image augmentation techniques, such as modifying contrast, aspect ratio, flipping, and brightness, were also utilized during training to reduce overfitting. The architectures used in the study included mainly transformers, CNN, or a combination of both. Pure transformers such as the ViT classifier, a scaled vision transformer , , were utilized to extend NLP for images using patching to reduce the sequence size. This approach is known for achieving top performance on ImageNet. The GitHub vit-keras codebase was employed with an image size of 384x384. Additionally, the transformer VOLO was tested. 
VOLO uses fine-level features or tokens that are often overlooked in self-attention methods. The Keras CV Attention GitHub repository (KCAC) was used for the implementation, with the VOLOd5 variant and an image size of 224x224. BEIT , another transformer architecture, tokenizes the image and applies masks to patches. The study utilized the KCAC variant BeitBasePatch16 with an image size of 224. DaViT is a simple visual transformer that captures global context by leveraging self-attention mechanisms with both spatial tokens and channel tokens. In this study, the KCAC variant DaViTS was used, with an image size of 224x224. Unlike the previously described architectures, CotNet is a hybrid of transformer and CNN that utilizes convolutions and employs attention on 2D feature maps. It utilized the KCAC variant CotNetSE152D with an image size of 320. Another CNN/transformer hybrid, CoAtNet , utilizes depthwise convolution and self-attention. The KCAC variant CoAtNet0 was used with an image size of 224x224. ResNeSt Split-Attention Networks is another CNN/transformer that combines CNN with attention mechanisms. It features split attention, which enables cross-feature interactions. The variant used is ResNest269 with KCAC , and the image size is 416x416. MLP-Mixer has an unconventional architecture as it is neither a CNN nor a transformer. It is a multilayer perceptron with no CNNs, transformers, or attention mechanisms. The implementation uses the MLPMixerL16 variant with KCAC and an image size of 224x224. The rest of the architectures mentioned are CNNs. RegNet , which belongs to the ResNeSt family, is a CNN with shortcuts to prevent vanishing gradients. Unlike other models, RegNet features a regulator module for improved complementary features. The variant used is RegNetZ with KCAC , and the image size is 256x256. Additionally, Normalizer-Free ResNeSts is a CNN that eliminates batch normalization, which can be computationally expensive. 
It uses the NFNetF2 variant with KCAC and an image size of 352x352. InceptionV3 is a classic CNN that utilizes factorized convolutions, wherein multiple filters are applied simultaneously to a channel, and label smoothing, which compensates for errors in ground truths. The implementation was carried out using Tensorflow , with an image size of 299x299. EfficientNet is a CNN that determines the width, resolution, and depth of a CNN through ‘compound scaling’, adjusting the width and depth for a given resolution rather than doing so arbitrarily. It employs KCAC and uses the variant EfficientNetV2S with an image size of 384x384. These architectures were trained on four public datasets, partitioning the data into 80% for training and 20% for validation. Each dataset was balanced, ensuring an equal number of images for each class. The accuracy scores were calculated by inferring from the 20% validation set, image by image, in order to obtain the final accuracy score. Since multiclass classification is not a regression problem, only accuracy was calculated. The training was stopped when the validation accuracy failed to increase after more than ten epochs. The architectures were trained on NVidia T4 (2560 cores) GPUs, except for the InceptionV3, which was trained using a Geforce GTX960 (1024 cores). The performance of different CNN architectures on various datasets with corresponding accuracies is reflected in Table and Fig. . The training time for each architecture was determined based on the processing time per image trained on the Eyepacs dataset with an NVidia T4 GPU. The batch number, calculated as the number of steps per epoch divided by the time to train each epoch, provides the number of images processed per second. Architectures that require high memory will have a lower batch number, resulting in fewer processed images per second. 
Similarly, a deep architecture with a wide field of view will also process fewer images per second due to its higher number of parameters. The most efficient training times were observed with the EfficientNet, RegNet, CoAtNet, and InceptionV3 architectures. Quantization is a crucial process for deploying trained models onto small devices. It involves converting the floating-point values in the model to 8-bit integers (uint8), resulting in a smaller model size and faster computation on devices based on ARM (Advanced RISC Machine) chips, such as many smartphones. Quantization requires determining the range of the input data so that the integers can be scaled correctly, which may involve clipping. The EfficientNet, BEIT, CotNet, and ResNeSt families were effectively quantized using the TFLiteConverter from TensorFlow Lite. For example, an EfficientNet model trained on Eyepacs was reduced in size from 245 MB to 80 MB, with the accuracy decreasing from 85% to 82% after quantization. Additionally, a quantized RegNet model was trialed on an iPhone XR using a TensorFlow repository for real-time classification with the iPhone's camera and could perform multiple classifications per second. The RegNet uint8 quantized model took 250 milliseconds per image inference, while a floating-point version of the same model took 400 milliseconds on the iPhone. The InceptionV3 models were quantized by first converting the model variables to constants using TensorFlow's convert_variables_to_constants function. The quantized model could then be deployed on Android and iPhone devices using the TensorFlow Android and iOS repositories. Heatmaps provide insights into classification decisions by highlighting the regions of an image that were influential in a specific diagnostic inference.
Heatmaps commonly employ the Grad-CAM technique, which takes the gradients of the target class (such as glaucoma) in a classification network and feeds them into the last CNN layer. This process generates a coarse localization map of the regions critical to the prediction, followed by backward propagation through a DeconvNet for reconstruction into the final heatmap image. An EfficientNet model, trained with the ACRIMA glaucoma dataset, was used to create heatmaps with the Grad-CAM technique from the Keras repository. The resulting heatmap image (Fig. ) highlights the parts of the optic disc that the classifier identified as significant during the classification decision. Heatmaps were also created for InceptionV3-trained models using a Grad-CAM repository. Figure shows a heatmap generated using guided backpropagation. Due to the disappointing performance of transformers with small datasets, no attempt was made to generate heatmaps for transformers like ViT, even though they can be created using the same method. The performance metrics of the various CNNs on the multiclass classification problem are summarized in Table . EfficientNet demonstrates strong performance across the different datasets with a remarkable training speed of 31 images per second. The transformers, by contrast, exhibit lower accuracy and slower training times. Considering accuracy, training speed, the ability to quantize, and the generation of heatmaps, EfficientNet emerges as the top performer overall. Transformers tend to overfit on smaller datasets, displaying poor performance even on the largest ophthalmic dataset, Eyepacs (over 25810 images). This suggests that more than tens of thousands of images are required to mitigate overfitting. The transformers were trained on the ImageNet dataset of over 14 million images, which may explain why they perform well on the ImageNet leaderboard.
Additionally, architectures that excel with larger datasets also tend to perform well with smaller datasets. For instance, EfficientNet performs well with Eyepacs and Messidor, while the MLP family yields inferior results on the same datasets. Despite the small size of the datasets used (by consensus, fewer than 4000 images), the accuracy was quite high, even though overfitting would typically be expected to reduce it. Overfitting on small datasets can be mitigated by techniques such as image augmentation, which rotates, flips, and crops images to make it harder for the classifier to memorize the training data, encouraging it to generalize instead. Image augmentation is commonly integrated into deep learning frameworks such as Keras (the Keras ImageDataGenerator augments with parameters including shear_range and zoom_range). The use of pre-trained models is also beneficial for smaller datasets, as these models have been trained on millions of ImageNet images and the learned filters can be reused for new images, including fundus images. Without pre-trained models, datasets would need to be much larger to train new filters. Additionally, a dropout layer can help reduce overfitting and is commonly included in CNN models such as EfficientNet. Dropout randomly zeroes activations during training, reducing the likelihood of data memorization and encouraging generalization. Using smaller models also helps mitigate overfitting, as fewer parameters make it harder for the model to memorize the training data. This may explain why EfficientNet, despite its small size (20.33 million parameters), performed well compared to VOLO, which has 296 million parameters. The poor performance of VOLO and DaViTS may be attributed to overfitting, resulting in memorization of the training data rather than generalization. The high parameter counts also explain why transformers took longer to train than CNNs, with ViT and VOLO having the slowest training times.
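The mitigations described above can be sketched together in Keras; the augmentation values, the EfficientNetB0 backbone, and the dropout rate are illustrative assumptions rather than the study's settings (and `weights=None` here avoids a download, where `weights="imagenet"` would load the pre-trained filters):

```python
import numpy as np
import tensorflow as tf

# Augmentation of the kind described: rotation, flips, shear, and zoom.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20, horizontal_flip=True, vertical_flip=True,
    shear_range=0.1, zoom_range=0.1)

# Stand-in images to stream augmented batches from.
x = np.random.rand(16, 96, 96, 3).astype("float32")
y = np.random.randint(0, 2, size=16)
batches = augmenter.flow(x, y, batch_size=8)
x_batch, y_batch = next(batches)  # one augmented batch

# A pre-trainable backbone plus a dropout layer to curb memorization.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(96, 96, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),  # randomly zeroes activations in training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```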
The underperformance of transformers with smaller datasets has been observed in studies such as those by Chen et al. and Zhu et al. Zhu et al. argue that the ViT transformer's lower performance on small datasets may be due to a "lack of inductive bias of locality with lower layers, where Vit cannot learn the local relations with a small amount of data." This poor performance may be attributable not only to the larger parameter size but also to the transformer architecture itself. Further research into improving transformers' ability to train on smaller datasets would be beneficial. When labeling datasets by grades rather than classes, such as the four grades of DR in the Eyepacs dataset, a standard multiclass classifier is not suitable. The multiclass classifier requires modification to produce a single-grade output ranging from 0 to 1. For example, in the case of InceptionV3, the number of labels was reduced to one using the TensorFlow tf.reshape function in the last layer, and the loss function was changed to mean squared error (MSE), replacing the softmax cross-entropy loss, since grading is not a classification problem; the slim Euclidean loss (MSE) replaced the slim softmax loss in the model. The same modification was made for the EfficientNet and RegNet models using a sigmoid activation. Zhang et al. took a different approach, using a deep graph correlation network (DGCN) consisting of multiple CNNs correlated through a graph. They claimed performance close to that of specialists. However, they did not compare the performance of a DGCN to that of a single modified CNN, so it is unclear whether it is superior to a single CNN.

Datasets

The datasets used to test these architectures included Eyepacs, which contained four grades of DR scaled between 0 and 1. For Messidor and Messidor-2, a grade of 0 was assigned to healthy images and a grade of 1 to DR images.
Architectures

There are no examples of grading architectures on Papers with Code. Therefore, architectures were selected based on the performance of the previously examined ones. The chosen architectures were EfficientNet, RegNet, and InceptionV3, each adjusted to use a single output with mean squared error (MSE) loss instead of softmax. We used 80% of the data for training and 20% for validation. Training accuracy alone cannot ensure better class prediction, so accuracy must be calculated differently: a prediction is considered correct if the predicted value and the ground truth are both less than 0.5, or both over 0.5. Since grading is a regression problem, AUC, precision, and recall were also calculated, with 0.5 as the midpoint. Each modified CNN architecture used for grading was trained on the different datasets, and their performances are detailed in Table . The grading accuracies are also depicted in Fig. as a heat grid.

Heatmaps, quantization, training time

Heatmaps and quantization were performed in the same way as for the multiclass classifiers, since the same architectures were used (except for the last layer). The training time was assumed to be the same because the architectures are identical.

Discussion

The performance metrics for the various grading classifiers are presented in Table . Among the three datasets tested, RegNet demonstrated the best performance for grading applications. RegNet, InceptionV3, and EfficientNet displayed similar capabilities for generating heatmaps and quantization, likely because they use the same architectures as in the multiclass classification problem. The AUC (area under the curve) is a helpful metric when dealing with unbalanced data and was used to evaluate the grading classifiers. The AUC values showed a strong correlation with accuracy, with RegNet achieving the highest AUC values, mirroring its accuracy performance.
RegNet also demonstrated the highest precision, averaging 91% across the three datasets, while EfficientNet averaged 85% and InceptionV3 averaged 76%. In terms of recall, EfficientNet averaged 78%, compared to 76% for RegNet and 53% for InceptionV3. That precision is higher than recall for the grading classifiers indicates that the models are better at predicting when a subject truly has a condition than at predicting when a patient does not. However, the threshold of 0.5 could be adjusted to balance recall and precision. RegNet's superior performance in regression compared to other models was also noted by Maddury et al., whose study indicated that, across different regression problems, RegNet outperformed EfficientNet. However, that paper did not explain why RegNet may have outperformed other models in regression. RegNet incorporates a regulatory module that controls the flow of information between layers, preventing early block information from being forgotten in later blocks, whereas EfficientNet optimally scales depth and width. It is possible that RegNet's regulatory module is better suited to regression tasks. The comparison showed few disadvantages to using grading over multiclass classification, especially since the accuracy is similar (85% for EfficientNet multiclass on Eyepacs versus 88% for grading). Grading also had similar training times and the same ability to generate heatmaps and to be frozen and quantized as multiclass classification. Moreover, grading offers the advantage of providing a probability of a condition instead of a discrete multiclass prediction, which may be more useful in a clinical setting.
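The grading correctness rule described earlier (prediction and ground truth falling on the same side of the 0.5 midpoint) reduces to a few lines; this is a plain-Python sketch of that rule, not code from the study:

```python
def grade_correct(pred, truth, threshold=0.5):
    """A graded prediction counts as correct when it lies on the same side
    of the threshold as the ground-truth grade (both below or both above)."""
    return (pred < threshold) == (truth < threshold)

def grading_accuracy(preds, truths, threshold=0.5):
    """Fraction of predictions on the correct side of the threshold."""
    pairs = list(zip(preds, truths))
    return sum(grade_correct(p, t, threshold) for p, t in pairs) / len(pairs)

# Example: two of three graded predictions fall on the correct side of 0.5.
acc = grading_accuracy([0.1, 0.7, 0.4], [0.0, 1.0, 1.0])  # 2/3
```

Raising or lowering `threshold`, as noted above, trades precision against recall.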
In the context of OCT 2D slices (as opposed to fundus camera images), the most effective architectures were studied using two publicly available OCT image datasets, OCT2017 and OCTID. Tsuji et al.
also demonstrated the effectiveness of training on OCT data (compared to fundus images) with CNNs for the pathologies CNV, DME, and drusen, achieving close to 100% accuracy.

Datasets

The OCT2017 dataset contains 2D cross sections of sagittal slices of the retina. It includes four image classes: choroidal neovascularization (CNV) (37205 images), diabetic macular edema (DME) (11348 images), drusen (8616 images), and healthy (26315 images). The CNV images display the neovascular membrane and associated subretinal fluid, while the DME images depict retinal thickening associated with intraretinal fluid, along with the multiple drusen present in early AMD. The OCTID dataset consists of slices displaying various eye pathologies: normal (200 images), macular holes (100 images), macular degeneration, and retinopathy (100 images). The EIA2020 dataset includes 200 normal and 200 glaucomatous optic disc cube OCT volumes from 200 participants, 100 diagnosed with glaucoma and 100 normal controls. All 2D images from the 200 participants were categorized into glaucoma and non-glaucoma multiclass groups. The dataset comprises 93760 Enface slices and 40400 longitudinal cross-sectional slices of the optic nerve head.

Architectures

The leaderboard on Papers with Code for the OCT2017 dataset is publicly available. However, because the accuracy of each architecture is close to 100%, it is challenging to determine which performed best. Therefore, the architectures that showed the best performance on fundus images were chosen: EfficientNet, RegNet, ResNeSt, CotNet, and InceptionV3. As with the fundus images, 80% of each dataset's images were used for training and 20% for validation. Each architecture was trained on the relevant dataset, and accuracy was calculated in the same manner as for the fundus images. The results are presented in Table and Fig. .
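Accuracy here, as for the fundus images, was computed by inferring on the held-out images one at a time; a sketch of that image-by-image loop with a stand-in model and random data (not the study's trained weights) might look like this:

```python
import numpy as np
import tensorflow as tf

# Stand-in classifier and validation data; in the study these would be the
# trained OCT models and the held-out 20% of each dataset.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
x_val = np.random.rand(10, 32, 32, 3).astype("float32")
y_val = np.random.randint(0, 4, size=10)

# Infer image by image and count correct top-1 predictions.
hits = 0
for image, label in zip(x_val, y_val):
    probs = model.predict(image[None, ...], verbose=0)  # batch of one
    hits += int(np.argmax(probs) == label)
accuracy = hits / len(x_val)
```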
Quantization, heatmaps and training time

Because these architectures are the same as those used for the fundus images, each architecture can be quantized, and heatmaps generated, with the same techniques as for the fundus images. Training times were also calculated in the same way.

Discussion

According to the leaderboard on Papers with Code, determining the best-performing architecture for 2D OCT images is challenging because accuracies are close to 100% for OCT2017 and OCTID. For EIA2020, the Enface slices showed the best performance with CotNet, while the remaining EIA2020 accuracies were mixed. Midena suggests that the high accuracy observed when training on OCT datasets arises because OCT images contain more information on eye structures than fundus images; the article describes how OCT images include eye structures that are not visible in fundus images. It might be suspected that the high accuracy is due to overfitting, but the accuracy was calculated on a separate validation set. The high accuracy on OCT images was observed for all models tested, indicating that OCT images are better than fundus images for predicting pathology.
When training on individual 2D OCT images with a 2D classifier, we achieved almost 100% accuracy on the datasets we used. However, training on an entire 3D OCT volume is expected to yield even better results. To accomplish this, we experimented with 3D CNNs and transformers.

Dataset

As with our previous 2D classifier, we utilized the EIA2020 dataset, but this time we employed the entire 3D volume of the Optic Disc Cube for classification. Figure shows a sample 3D OCT volume.

Architectures

No Papers with Code leaderboards of 3D classifier architectures were available for the targeted application. Hence, potential classifiers were tested from GitHub: three 3D CNNs and two transformers. The 3D CNN architecture for volumetric data, with voxels instead of 2D points, was used as specified by Ahmed et al. The 3D CNNs used are less deep and wide than 2D CNNs because of the memory demands of the extra dimension of volumetric data. All architectures were trained on the EIA-2020 dataset, with the Optic Disc Cube in the Enface orientation; the OCT volumes were 128x128x64 for each patient. The CNN-3D-images-Tensorflow repository is similar to a 2D CNN but includes two Conv3D layers instead of multiple Conv2D layers. It comprises a ReLU layer, followed by fully connected layers, with two Conv3D layers (32, 64) and dropout. The 3D CNN in the Keras io repository is deeper than the previous architecture, with four Conv3D layers (64, 64, 128, 256) and dropout. The 3D-CNN-Keras repository has just one layer by default but was modified to have five layers, and it includes batch normalization. The Perceiver transformer was tested using the Keras perceiver code, which is designed to train on images with three channels (RGB). Here, however, the three channels were replaced with a stack of 64 grayscale 128x128 OCT images, forming a volume of 128x128x64.
The perceiver is a transformer, as opposed to a 3D CNN, and is capable of processing data in various formats, including audio, video, 3D volumes and images. It uses attention with key and query sizes that are unrelated to the input size, allowing it to conserve memory compared to traditional transformers for the same input size. The second transformer trialed was the ViT transformer, implemented using vit-keras. It was originally designed for 2D classification but, as with the perceiver, it was modified to take a stack of 64 slices in place of the three RGB channels, producing an input volume of 128x128x64. As with 2D classification, 80% of the data was used for training and 20% for validation. Table displays the accuracy and classification time of each classifier on the trialed dataset. Figure depicts the heat grid of the different CNNs for classification.

Heatmaps

As with the 2D classifiers, heatmaps can be generated when inference is performed on sample OCT volumes, using GitHub code from Mehanna modified to work in 3D on 3D-CNN-Keras. As with the 2D heatmaps, the technique uses Grad-CAM. Figure shows a sample glaucoma OCT volume from the EIA-2020 dataset, highlighting the area around the optic disc. Heatmaps were only generated for 3D-CNN-Keras, but the same steps can be applied to the other 3D CNN architectures.

Training time

Training time was calculated in the same way as for 2D images: it was estimated from the processing time per volume, with the batch size multiplied by the number of steps per epoch and divided by the epoch time.

Discussion

The CNN-3D architecture showed the highest accuracy, outperforming the Keras io, ViT, and perceiver models. When tested on the MosMed dataset, the CNN-3D architecture achieved an accuracy of 88%, while Keras io scored 68%, ViT scored 48%, and the perceiver scored 55%.
Despite achieving the highest accuracy, the CNN-3D architecture also had the slowest training speed, whereas the 3D-CNN-Keras model was the fastest. We found that the CNN-3D model performs better than the same slices trained in 2D. We organized the glaucoma slices from each patient in the EIA-2020 data into one group and the normal slices into another, giving two groups of over 40000 images each (49001 and 44761). The two groups were trained using an InceptionV3 classifier. The CNN-3D model was 93% accurate, while the InceptionV3 model was 78% accurate, demonstrating the advantage of training on an entire volume rather than on individual slices. Quantizing 3D classifiers is impractical, because performing inference on extensive 3D volumetric data such as OCT scans on a smartphone is not feasible due to hardware constraints. As a result, we did not attempt to quantize the 3D classifiers, although it can be done in the same way as for 2D classifiers. Also, because only a single dataset (EIA-2020) was available, we were unable to compare the performance of the different 3D CNN architectures on datasets of different sizes.
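A minimal 3D CNN in the spirit of the CNN-3D-images-Tensorflow design described earlier (two Conv3D layers with 32 and 64 filters, ReLU, dropout, and a dense head, over 128x128x64 volumes) can be sketched as follows; the pooling sizes and dropout rate are assumptions, not the repository's exact values:

```python
import tensorflow as tf

# Input: one 128x128x64 OCT volume per patient, single grayscale channel.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 64, 1)),
    tf.keras.layers.Conv3D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling3D(2),
    tf.keras.layers.Conv3D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling3D(2),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(2, activation="softmax"),  # glaucoma vs normal
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Note the voxel grid: each Conv3D filter slides over all three spatial dimensions at once, which is why memory limits force 3D CNNs to be shallower and narrower than their 2D counterparts.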
We experimented with several deep learning architectures (transformers, transformer hybrids, and CNNs) across various image types. Transformers, which have received wide attention through systems such as ChatGPT, were included in our trials. After conducting our study, we determined the best architectures for the different image modalities. EfficientNet performed best in terms of accuracy, training time, and its ability to work with smaller datasets for classifying color fundus and OCT images. For grading, RegNet was the most effective, and for OCT 3D volumes, CNN-3D was the best performer, despite not being the fastest. While transformers have received significant attention recently, our study found that they were outperformed by CNNs, consistent with prior research indicating that transformers rely on large datasets to achieve desirable performance and tend to overfit on smaller datasets. These limitations favour their use by those who have access to big data and high computing power. The proposed study has some limitations. The method we propose uses models that rely on publicly available datasets for training, testing, and validation. However, publicly available healthcare datasets often have limitations, such as restricted clinical information. These datasets tend to focus on single diagnoses without providing broader comorbidity data. Additionally, disease status labels, indicating whether a person is positive or negative for a particular disease, may come from a single diagnostician, potentially introducing significant bias. Another issue is that these datasets often exhibit biases toward Western sources, attributable to data availability, dominant platforms, and the prevalence of English-language content. Furthermore, datasets obtained from private healthcare providers may lead to an under-representation of patients with lower income and from ethnic minorities.
It is vital to have comprehensive whole-person clinical data to understand complex patient conditions. Simply having large datasets is not enough to address the issue of generalizability. It is important to also have diversity and cross-population validation to ensure that models can be used in real-world scenarios. Using multimodal approaches that combine different types of data (such as imaging, genomic, and clinical data) is likely to improve model performance and generalizability. Robust real-world testing and diverse datasets are necessary to ensure that AI systems are effective across various clinical settings and patient populations, addressing both technical and equitable healthcare challenges. This paper adds to our understanding of different AI approaches for ophthalmological applications. It compares the performance of various combinations of CNN architectures and image modalities, highlighting differences in their accuracy and ability to perform various machine-learning tasks. It emphasizes the importance of heatmaps in providing transparency into the decision-making process of CNNs by highlighting areas of interest in an image. The study makes a significant contribution to the journey of AI development, providing detailed information for those involved in integrating these algorithms into medical devices. However, there are still areas that require attention in future studies. One such area is the absence of 3D OCT datasets graded for glaucoma severity. Due to the difficulty in collecting data and the limited reliability of smaller datasets, there is a need for more specific research on hybrid architectures. These architectures could combine the strengths of transformers and CNNs, while also establishing continuous systems to self-monitor performance and refine new approaches for effectively handling smaller datasets. It is well established that many systemic diseases can be detected through observable changes in the retina.
AI technology is at the forefront of using the eye to gain insights into overall health. Therefore, future research has significant potential in examining retinal markers of systemic conditions using large cohort datasets containing extensive ophthalmic imaging and comprehensive longitudinal data on various comorbidities.
Relationship Between Pregnant Women's eHealth Literacy and Their Attitudes Toward Sexuality

Introduction

As the internet is becoming more and more prevalent today, and access to the internet is more convenient, various forms of information can be reached on the internet (Al‐Dahshan et al. ; Artieta‐Pinedo et al. ; Hadımlı et al. ). The function of the internet as a facilitator of access to information in the field of health has given rise to the concept of eHealth literacy. Researchers define eHealth literacy as the “capacity to search for and find information related to health on electronic sources”, understand it, evaluate it, and use it to make decisions about one's health (Norman and Skinner ). One of the groups of people who use the internet to access health‐related information and make decisions is pregnant women (Šoštarić and Jokić‐Begić ). The topics about which pregnant women prefer to reach information on the internet include pregnancy symptoms, fetal development, physical activity, pregnancy complications, childbirth, breastfeeding, and infant care. Sexuality is also among these topics (Dickerson ; Kamali et al. ). Healthy sexuality during pregnancy strengthens the compatibility of the couple and their emotional connection and plays a role in the continuation of their relationship. In periods like pregnancy where changes in sexuality are experienced, eHealth literacy is highly important for a healthy sex life. eHealth literacy levels are very important for pregnant women to access information about sexuality, research such information, make effective decisions, and shape their attitudes toward sexuality. For this reason, in prenatal follow‐ups, pregnant women should be evaluated in terms of their eHealth literacy levels, and their eHealth literacy should be improved (Abiş and Kantaş Yılmaz ; Nawabi et al. ; Meldgaard et al. ).
In the literature, there are studies that have examined the eHealth literacy levels of pregnant women (Villadsen et al. ; Xu et al. ; Yumei et al. ) or their attitudes toward sexuality (Adegboyega ; Igbana et al. ; Fatima et al. ). However, no study in which the relationship between the eHealth literacy levels of pregnant women and their attitudes toward sexuality was investigated could be found. To fill this gap in the literature, this study was planned to determine the relationship between the eHealth literacy of pregnant women and their attitudes toward sexuality.

Materials and Methods

The study was designed as a cross‐sectional study.

2.1 Participants

The minimum required sample size for the study was calculated as 272 participants, based on a 95% confidence interval, a 90% testing power, and a 0.10 effect size (G*Power 3.1.9.2). The study was completed with 297 participants. The post hoc power analysis showed a power of 0.957. Pregnant women who were carrying singleton and healthy fetuses, did not have any communication problems, and had access to electronic sources were included in the study. Those who had become pregnant as a result of treatment and those who had been instructed to avoid sexual intercourse by their doctors due to risky pregnancies were excluded.

2.2 Measures

The data were collected using a Personal Information Form, the eHealth Literacy Scale, and the Attitude Scale toward sexuality during pregnancy (AStSdP). The Personal Information Form consisted of questions about the sociodemographic (age, education status, partner's age, partner's education status, duration of marriage) and obstetric (number of pregnancies, number of living children) characteristics of the participants. The eHealth Literacy Scale (eHEALS), which was developed by Norman and Skinner , was evaluated in terms of its reliability and validity in the Turkish language by Tamer Gencer .
This scale consists of two questions on internet usage that are not included in the scoring and eight items, which are rated based on a 5‐point Likert‐type scoring system. The scale does not have any inversely scored items or a cut‐off point. Possible scale scores vary between 8 and 40. Higher scores are considered to be indicative of greater eHealth literacy levels. The Cronbach's alpha coefficient of the scale was reported to be 0.915 in its Turkish validity and reliability study (Tamer Gencer ), while it was calculated as 0.951 in this study.

The AStSdP was created by Sezer and Şentürk Erenel to investigate the sexuality‐related attitudes of pregnant women or men whose partners are pregnant. AStSdP includes 34 items and three subscales: the “anxiety about sexual intercourse during pregnancy” subscale with 9 items, the “dysfunctional beliefs and values about sexuality during pregnancy” subscale with 10 items, and the “approving sexuality during pregnancy” subscale with 15 items. It is a 5‐point Likert‐type measurement instrument, with minimum and maximum scores of 34 and 170. While greater AStSdP scores are accepted to correspond to more positive sexuality‐related attitudes during pregnancy, lower scores indicate more negative attitudes. The cut‐off point of the scale is 111.5, and scores greater than 111.5 are interpreted as having positive attitudes regarding sexuality during pregnancy. The Cronbach's alpha coefficients of the subscales of the scale were reported to vary in the range of 0.81–0.86, while the coefficient for the total scale was 0.90 (Sezer and Şentürk Erenel ). In this study, this coefficient was determined to be in the range of 0.77–0.86 for the dimensions and equal to 0.88 for the overall instrument.

2.3 Data Collection

This study was carried out between June and August 2022 at the antenatal outpatient clinics of Necmettin Erbakan University Faculty of Medicine Hospital.
Pregnant women who attended the antenatal outpatient clinics for routine follow‐ups and met the inclusion criteria were included in the sample. Women were included in the sample by nonprobability random sampling, and the data were collected face‐to‐face. It took about 15–20 min to collect data from each participant.

2.4 Ethical Considerations

Ethics committee approval was obtained from the Ethics Committee of Necmettin Erbakan University (Approval no. 221), and written permission was obtained from the hospital where the study was conducted. The purpose of the study was explained to the participants, and verbal informed consent was obtained from the participants.

2.5 Statistical Analysis

The statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) 25.0. The descriptive statistics of the collected data consisted of frequency, percentage, mean, standard deviation, median, minimum, and maximum values. Mean AStSdP scores were compared between two groups of participants using the independent‐samples t‐test or the Mann–Whitney U test and among three or more groups using one‐way analysis of variance (ANOVA) (post hoc Bonferroni multiple comparisons) or the Kruskal–Wallis H test (post hoc Mann–Whitney U test for pairwise comparisons). The relationships between variables were analyzed using Pearson's correlation analysis method, and the predictive relationships between variables were analyzed using the multiple linear regression analysis (backward) method. The results of all analyses were evaluated within a 95% confidence interval.
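To make the scoring rules from the Measures section concrete, a minimal sketch follows. The item responses are hypothetical; only the summation ranges (8–40 for eHEALS, 34–170 for AStSdP) and the 111.5 cut-off come from the scale descriptions above, and subscale breakdowns and any reverse keying are omitted:

```python
def score_ehealth(items):
    """Sum the 8 eHEALS Likert items (each 1-5); possible range 8-40.

    The two internet-usage questions are excluded from scoring, so only
    the 8 scored items are passed in. Higher totals indicate greater
    eHealth literacy; the scale has no reverse-scored items or cut-off.
    """
    if len(items) != 8 or not all(1 <= x <= 5 for x in items):
        raise ValueError("eHEALS expects 8 items rated 1-5")
    return sum(items)


def score_astsdp(items):
    """Sum the 34 AStSdP items (each 1-5); possible range 34-170.

    Totals above the 111.5 cut-off are interpreted as a positive
    attitude toward sexuality during pregnancy.
    """
    if len(items) != 34 or not all(1 <= x <= 5 for x in items):
        raise ValueError("AStSdP expects 34 items rated 1-5")
    total = sum(items)
    return total, total > 111.5
```

For instance, a respondent answering 4 on every AStSdP item totals 136 and would be classified as having a positive attitude.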
Results

The distributions of the sociodemographic and obstetric characteristics of the participants are presented in Table . The participants had a mean total eHEALS score of 25.28 ± 7.15, while their mean total AStSdP score was 118.87 ± 16.73 (Table ). Considering the questions on eHEALS that are not included in the scoring, for the question about the contribution of the internet while making decisions about one's health, 10.8% of the participants responded as “not useful at all,” 10.1% responded as “not useful,” 32.3% responded as “no idea,” 44.8% responded as “useful,” and 2% responded as “very useful.” For the other question that is not included in the scoring, regarding the importance of accessing health‐related sources on the internet, 5.7% of the participants responded as “not important at all,” 14.5% responded as “not important,” 22.9% responded as “no idea,” 49.5% responded as “important,” and 7.4% responded as “very important.” There were weak, positive, and statistically significant relationships between the eHEALS scores of the participants and their total AStSdP scores, AStSdP “dysfunctional beliefs and values about sexuality during pregnancy” dimension scores, and AStSdP “approving sexuality during pregnancy” dimension scores ( p < 0.01, Table ).
The eHEALS scores of the participants were very weakly related to their AStSdP “anxiety about sexual intercourse during pregnancy” dimension scores, but the relationship between these variables was not found to be significant ( p > 0.05, Table ). The independent variables that were identified to have significant effects in the univariate analyses were entered into the multiple regression analysis. It was determined that the AStSdP scores of the participants were influenced by 5 independent variables (age, age of partners, marriage duration, monthly income, and eHEALS score). The results of the multiple linear regression analysis (backward method) on the relationships between these independent variables and the dependent variable are presented in Table . According to the correlation analysis results and collinearity statistics of the independent variables among each other, there was no autocorrelation problem in the data. Among the variables that were included in the regression model, two independent variables were removed from the model (first age of the spouse and then marriage duration) as they were not found to predict AStSdP scores significantly ( p > 0.05, Table ). For the variables that remained in the model as a result, the order of significance based on their β coefficients indicating their prediction of AStSdP scores was (from the most significant to the least significant) as follows: eHEALS score, age, and monthly income ( p < 0.001). These three independent variables explained 12.7% of the total variance in the AStSdP scores of the participants. A one‐unit increase in the eHEALS scores of the participants corresponded to a 0.228‐unit increase in their AStSdP scores, a one‐unit increase in age corresponded to a 0.187‐unit increase in their AStSdP scores, and a one‐unit increase in their monthly income corresponded to a 0.183‐unit increase in their AStSdP scores (Table ). 
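Reading the reported coefficients as unstandardized slopes, as the one-unit interpretation above implies, the expected change in AStSdP score for hypothetical changes in the three retained predictors can be sketched as follows (the intercept is not reported, so only score differences, not absolute predicted scores, are computed):

```python
# Coefficients for the three predictors retained in the final
# backward-elimination model, as reported in the Results section.
COEFFS = {"eheals": 0.228, "age": 0.187, "income": 0.183}


def predicted_change(d_eheals=0.0, d_age=0.0, d_income=0.0):
    """Expected change in total AStSdP score for given changes in the
    eHEALS score, age, and monthly income of a participant."""
    return (COEFFS["eheals"] * d_eheals
            + COEFFS["age"] * d_age
            + COEFFS["income"] * d_income)
```

For example, all else being equal, a 10-point higher eHEALS score corresponds to an expected AStSdP score roughly 2.3 points higher.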
Discussion

In this study, it was determined that the eHealth literacy levels of pregnant women were “high,” and their attitudes toward sexuality were “positive.” As the eHealth literacy levels of the participants increased, their positive attitudes toward sexuality during pregnancy also increased. eHealth literacy was identified as a significant positive predictor of the attitudes of pregnant women toward sexuality. Similarly, in other studies conducted in Turkey, the eHealth literacy levels of pregnant women have been identified to be “high” (Avçin and Can ; Baltacı et al. ; Şahin et al. ; Yılar Erkek and Öztürk Altınayak ). In a study carried out in the United Arab Emirates, it was found that 71.6% of pregnant women had sufficient levels of health literacy (Elbarazi et al. ). In a study in Iran, it was reported that pregnant women had “good” levels of eHealth literacy (Rahdar et al. ). The result of this study may be interpreted from two points of view. First, the fact that the women in the sample of this study were mostly young (41.8%: 17–25 years old, 45.8%: 26–35 years old) and had high levels of education (37.0%: high school, 20%: university or higher) may have resulted in their high levels of eHealth literacy. Other studies in the literature have also shown that women at younger ages and with higher education levels search for information on the internet more frequently (Taştekin Ouyaba and İnfal Kesim ), and their eHealth literacy is high (Şahin et al. ). Second, it was reported that the COVID‐19 pandemic raised the eHealth literacy of individuals by limiting their social activities and interpersonal relationships (Liang et al. ; Liu et al. ). Considering the dates on which the data of this study were collected, the eHealth literacy of the participants may have increased as a consequence of the COVID‐19 pandemic and has remained high since then.
The attitudes of the participants of this study toward sexuality during their pregnancy were determined as “positive.” The attitudes of pregnant women toward sexuality in Turkey have been reported to be “positive” in some studies (Alan Dikmen et al. ; Akın and Çelik ; Altınayak and Özkan ) and “negative” in some others (Güney and Bal ; Yılmaz Sezer et al. ; Yuvarlan and Beydağ ). Studies carried out in different cultures have, similarly, also demonstrated positive attitudes among pregnant women toward sexuality during pregnancy (Adegboyega ; Igbana et al. ; Fatima et al. ). In a qualitative study, while some pregnant women stated that physical changes affected their sexuality negatively, others said their sex drive increased (Leite et al. ). In another qualitative study, Ryan et al. reported that some pregnant women thought sexual activity during pregnancy could harm the baby and would be a sin. Other pregnant women in the same study had positive attitudes toward sexuality during pregnancy, thinking that sexual activity could help the labor process, sexual intercourse could be good for fetal health, and it is necessary for the continuation of their relationships with their partners. It is believed that the result we obtained in this study was associated with the adjustment of the participants to the changes occurring in their sexuality during pregnancy. On the other hand, it was stated in the literature that married couples could avoid sex during pregnancy because of their negative attitudes toward the physical and psychological changes occurring during pregnancy (Erbil ; Gümüşay et al. ). It was found in this study that as the eHealth literacy levels of the participants increased, their positive attitudes toward sexuality during pregnancy also increased. Additionally, the eHealth literacy variable was identified as a factor that significantly influenced the attitudes of the participants toward sexuality. Barikani et al. 
reported a weak positive relationship between the health literacy levels and sexual functions of women. Dehghankar et al. revealed sexual health literacy as one of the factors that affected the sexual functions of women. In the same study, it was shown that having excellent, adequate, and somewhat inadequate sexual health literacy levels affected sexual function 4.222, 2.219, and 1.313 times more, respectively, than having very inadequate literacy levels (Dehghankar et al. ). In another study, Panahi et al. listed sexual health literacy among the factors that affected the sexual quality of life of women. In their study, they reported 3.415, 2.304, and 1.412 times higher sexual quality of life, respectively, in those with excellent, adequate, and somewhat inadequate sexual health literacy compared to those with very inadequate literacy levels (Panahi et al. ). The results of this study were in agreement with those in the literature. This study is the first study to investigate the relationship between the eHealth literacy of pregnant women and their attitudes toward sexuality during pregnancy. For this reason, the strongest aspect of this study is its contribution to filling this gap in the literature. Nevertheless, the absence of another study on this particular topic may limit the comparability of the results and highlight the need for more studies in this field. Another important limitation of the study was that it was conducted at a single center. This limits the generalizability of its results. Finally, the data of the study were collected based on self‐reports. Because the Cronbach's alpha coefficients of the scales used to collect data were high (eHEALS: 0.951, AStSdP: 0.90), it was assumed that this limitation had been controlled for. 
Conclusion

In this study, it was determined that the eHealth literacy levels of pregnant women were “high,” and their attitudes toward sexuality were “positive.” The eHealth literacy levels of the participants were identified as a significant and positive predictive factor of their attitudes toward sexuality during pregnancy. The constant advancements in technology and continuously increasing internet usage rates result in a higher abundance and complexity of information. To have the ability to protect and improve maternal and fetal health by making appropriate interventions, it is important for health professionals to know the information requirements and sources of pregnant women. eHealth literacy is an important factor for pregnant women to access information about sexuality, research such information, and make advisable decisions. Thus, healthcare professionals should evaluate the eHealth literacy levels of pregnant women and their attitudes toward sexuality during their prenatal follow‐up visits.

Elif Çini: conceptualization, investigation, funding acquisition, writing–original draft, methodology, validation, visualization, writing–review and editing, software, formal analysis, project administration, data curation, supervision, resources. Hamide Aygör: conceptualization, investigation, writing–original draft, methodology, validation, visualization, writing–review and editing, software, formal analysis, project administration, data curation, supervision, resources.

The study was approved by the Necmettin Erbakan University Health Sciences Scientific Research Ethics Committee (Approval no. 221), and necessary permissions were obtained from the hospital where the study was carried out.

The authors declare no conflicts of interest.

The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.70390.
Teaching teen titans: An anatomy curriculum using superheroes for middle‐ and high school students in health professions outreach programs

Early exposure to science enhances students' curiosity, develops critical thinking, and stimulates overall development. Outreach programs administered by health professions schools are prime examples of ways to expose younger individuals to the fields of science, technology, engineering, and mathematics (STEM), as well as to different careers within the biomedical sciences and health professions. Programs such as those that bring high school students into a health professions school have been shown to improve students' attitudes toward science. Further, summer programming for middle‐ and high school students can improve student knowledge of, interest in, and motivation for the fields of STEM. These outreach programs are especially important for recruiting students from diverse and disadvantaged backgrounds to the health professions by supporting historically marginalized students to pursue STEM majors, leading to long‐term careers in healthcare. Outreach programs hosted by health professions schools can cover a wide range of topics, including the basic sciences, career exploration, and research skills. Despite this variability, a common component of many health professions outreach programs is human anatomy. Anatomy is an important discipline for students to explore, but it can be considered academically challenging and even dull, especially by younger learners. Mentally reconstructing or visualizing anatomical structures can be a barrier to a student's understanding. At the pre‐professional and professional education levels, various educational tools, ranging from plastic models to human‐body dissections, have successfully been implemented to teach anatomy.
Even with these dynamic teaching methods, it can still be challenging for students to apply their knowledge when clinical experience and exposure are not readily available, which is especially true for students participating in a short outreach program. Creative teaching methods, especially those that help students relate to the material, can promote student engagement and success. Some common examples of these include incorporating music and field trips to help students assimilate and understand new information. Another element, employed in education across all ages, is the incorporation of popular culture into course content. Some examples have included using Harry Potter to teach elementary students about physical properties of substances, Avatar to discuss ecosystems with high school students, or even an activity based on the game Survivor for an undergraduate pre‐calculus class. In another case, an interdisciplinary science course called “Science and the Movies” used major films, such as Frankenstein and Blade Runner, in conjunction with readings for a class of nonmajors at a liberal arts college. Moreover, superheroes are another aspect of popular culture that can be incorporated into curricula used to teach biology, as well as anatomy and physiology. Several studies have also championed the use of superheroes in various educational settings, with some benefits being the simplification of complex concepts for novice learners, the promotion of interest in the fields of STEM, the development of higher‐level critical thinking skills, and increased student enjoyment of the material. In addition to using popular culture to teach in the classroom, it can also be used for various outreach sessions and events. One example is SciPop Talks!, an outreach program that aims to informally engage the public with scientists and uses themes from pop culture, such as vampires, zombies, and Harry Potter, to broaden interest in science.
This study aimed to evaluate the use of superheroes to teach anatomy to middle‐ and high school students as part of a two‐session anatomy curriculum within two health professions outreach programs. The primary hypothesis was that students would perceive the superhero curriculum as beneficial to their learning experience and preferable to traditional (i.e., lecture and laboratory experience without the use of superhero examples) teaching methods. The secondary hypothesis was that this engaging curriculum would supplement the overall goal of the health professions outreach programs of increasing student interest in the fields of STEM and healthcare.

All methods for this research study were approved by the Institutional Review Board at Rutgers, the State University of New Jersey. Rutgers New Jersey Medical School (NJMS) hosts a variety of summer programs for middle‐ and high school‐aged students. These programs are specifically designed to engage historically underrepresented students in healthcare. This study included students who were part of the Science Medicine And Related Topics (SMART) program and the Summer Youth Scholars Program (SYSP) in the summer of 2023. The SMART program is a summer program for students entering grades 7 through 12 that includes teaching students the basic concepts of human anatomy and physiology, as well as introducing them to various career options in the fields of science and medicine. For this program, the students apply to the program with consent from their parents or guardians, and they are accepted after their applications are reviewed. For summer 2023, 43 students were enrolled and all participated in this study. The SYSP program is a summer program for students entering grades 11 and 12 that includes preparing students for a college‐entrance standardized examination, as well as teaching students the basic concepts of human anatomy and physiology.
For this program, students apply to the program with consent from their parents or guardians and are then interviewed and accepted. For summer 2023, 24 students were enrolled and all participated in this study. For this study, SMART and SYSP students participated in a new two‐session superhero‐themed anatomy outreach curriculum (outlined below). Each program participated in its own outreach session days with the same lesson plans for each program. This study utilized pre‐ and post‐session surveys, which were adapted from Grachan et al., which explored the integration of superheroes into an undergraduate anatomy curriculum. The pre‐session survey was administered to all students prior to the lecture they received during the first session, while the post‐session survey was administered to all students after they completed the second session. These surveys contained multiple choice, Likert scale, and free‐text response questions. The quantitative survey responses were analyzed using Microsoft Excel. The free‐text responses were reviewed independently by two members of the study team to identify themes; these themes were compared and reviewed by a third member of the study team, who found them compatible over 95% of the time.

Superhero anatomy outreach curriculum

The first session focused on the musculoskeletal system, the cardiovascular system, and the respiratory system. The second session focused on the gastrointestinal system and the nervous system. Each session began with a 1‐h lecture conducted by a full‐time medical school anatomy faculty member that presented the foundational anatomy of each system and some of the physiological roles of major structures. These lectures also included discussions with the students on how organ systems could be represented in, and differ from, various superheroes based on topics discussed in Grachan and Quinn's suggested topics for integration.
For example, when discussing the musculoskeletal system, characters with hyperflexibility were introduced and related to Ehlers‐Danlos syndrome. Superhero examples were deliberately chosen by the facilitators to be inclusive of diverse identities. Additionally, for each superhero discussed, a summary of their identities and powers was provided, including an explanation of their origin and how they relate to common pathologies. After the lecture, students participated in a 1‐h laboratory component of the curriculum that related to the body systems discussed. The laboratory component was designed to have stations related to each of the systems discussed in the lectures using prosected anatomical donors (i.e., cadavers), including some isolated organs. All students were required to have their parent/guardian review and sign a specific consent form for these sessions that informed both the student and their parent/guardian of what the sessions would include and the lab policies. Students were encouraged to hold and feel structures being discussed based on their own comfort levels. For the first laboratory, the students were divided into groups of 6–10 students that rotated through three stations led either by a full‐time medical school anatomy faculty member, a postdoctoral fellow, or a medical student. The first station explored the knee joint and its associated structures. At this station, the facilitator discussed prosthetics through the example of superheroes with augmented limbs (e.g., Marvel's The Winter Soldier), as well as relating joints to how flexibility works. The second station demonstrated the major muscles/muscle groups in the human body and discussed how muscle attachments determine a muscle's action. For this station, the sartorius muscle and rectus femoris muscle were specifically highlighted to discuss how they perform the same action at the hip joint, but the opposite action at the knee joint.
The third station focused on cardiopulmonary anatomy and included two isolated human heart specimens, one healthy heart and one with a pacemaker and coronary artery bypass grafts, as well as the lungs, respiratory diaphragm, and an isolated bronchial tree. The superhero correlations for this station included athleticism in superheroes, such as discussing left ventricular hypertrophy and lung capacity through the example of superheroes with super‐breath or the ability to hold their breath for extended times underwater. For the second laboratory, the students were divided into groups of 10–12 students that rotated through two stations led either by a full‐time medical school anatomy faculty member, a postdoctoral fellow, or a medical student. The first station discussed the nervous system through a laminectomy prosection that showed the spinal cord, as well as whole and hemisected human brains to highlight major lobes and structures of the brain that were discussed in the lecture. One example of a structure highlighted at this station was the pre‐frontal cortex and its role in executive functioning, which could be related to zombies or Marvel's The Incredible Hulk. The second station focused on the gastrointestinal tract as students were shown the structures food passes through from the esophagus through the distal colon. As the structures were highlighted, their major functions were discussed and related back to superheroes, such as how the stomach produces acid and could be related to an "acid spit" power or how the jejunum has many folds to increase surface area, which would be valuable for absorbing enough nutrients to maintain their powers.

Participant demographics

Between the two summer outreach programs, 67 students participated in the anatomy teaching sessions and 58 completed both the pre‐ and post‐session surveys. Demographic information about these participants is summarized in Table .

Superhero background and preferences

Prior to participating in the superhero anatomy curriculum, students were asked to report their existing interest and background knowledge in superheroes (Table ). When asked how much of a fan of superheroes students consider themselves on a 5‐point scale, the average group response was 3 ( SD = 0.90). When asked how knowledgeable students are about superheroes, the average group response was 3.1 ( SD = 0.79). Almost all students (57 students, 98.3%) reported having at least some background knowledge of superheroes. Students were asked to self‐select the different media their exposure to superheroes came from and they could choose all that applied to them. Each student's selection would provide insight into how they perceive superheroes across various media. Most students stated their background knowledge came from movies (50 students, 86.2%). Other common sources were cartoons or television shows (36 students, 62.1%), peer discussions (20 students, 34.5%), comic books or graphic novels (17 students, 29.3%), and news articles or social media (12 students, 20.7%). After participating in the superhero anatomy curriculum, 20 students (34.4%) strongly agreed and 32 students (55.2%) agreed that the use of superheroes and pop culture helped maintain their interest in the course material. Only two students (3.4%) strongly disagreed.
In addition, 23 students (39.7%) strongly agreed and 30 students (51.7%) agreed that the use of superheroes and pop culture helped them gain a deeper understanding of the content. Again, only two students (3.4%) strongly disagreed. Almost all students (53 students, 91.4%) reported that the integration of superheroes in the anatomy curriculum improved their learning experience and most students (48 students, 82.8%) reported that the integration of non‐superhero pop culture characters would have improved their learning experience. Most students (51 students, 87.9%) reported that they prefer learning with superheroes to more traditional learning formats. Throughout the curriculum, a variety of major superheroes were introduced and discussed. This included introducing multiple superheroes with the same or similar powers and the faculty member who developed the curriculum was intentional in including superheroes to represent different genders, races, sexual orientations, and backstories/careers. After participating in the superhero anatomy curriculum, students were asked if they felt the diverse identities of superheroes described above helped them identify more with the class material. 
In regard to including superheroes of different genders, 25 students (43.1%) responded "yes," that the inclusion of superheroes of different genders helped them identify more with the class material, while 33 students (56.9%) responded "no." For superheroes of different races, 21 (36.2%) responded "yes" and 37 (63.8%) responded "no." For superheroes of different sexual orientations, 22 (37.9%) responded "yes" and 36 (62.1%) responded "no." Lastly, for superheroes with different backstories and careers, most responded "yes" (46 students, 79.3%), while 12 students (20.7%) responded "no."

Interest in STEM, healthcare, and anatomy

In addition to exploring if the use of superheroes played a role in boosting students' interest in the class material, this study also explored if the anatomy outreach sessions affected the students' interest in STEM, healthcare, and anatomy (Table ). Prior to participating in the anatomy outreach sessions, 42 students (72.4%) were either interested or very interested in the fields of STEM. After participating in the anatomy outreach sessions, 46 students (79.3%) reported that they were more interested in the fields of STEM and 11 students (19.0%) reported that their level of interest was maintained. In regard to their interest in healthcare before the outreach sessions, 40 students (69.0%) were either interested or very interested. Most students (44 students, 75.9%) reported that they were more interested in the field of healthcare and 13 students (22.4%) reported that their level of interest was maintained after the sessions. Lastly, in regard to interest in the field of anatomy, 43 students (74.1%) were either interested or very interested prior to the outreach sessions. After the sessions, 49 students (84.5%) reported that they were more interested and nine students (15.5%) reported that their level of interest was maintained.
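The interest percentages reported above follow directly from the raw counts over the 58 completed surveys. A minimal sketch of that arithmetic (the `pct` helper and the dictionary layout are illustrative, not part of the study's analysis; the counts are taken from the text):

```python
# Recompute the reported interest figures from the raw counts (n = 58 survey respondents).
def pct(count: int, total: int = 58) -> float:
    """Express a count as a percentage of the respondent total, to one decimal place."""
    return round(count / total * 100, 1)

# Counts from the text: interested/very interested before the sessions,
# and "more interested" after the sessions.
before = {"STEM": 42, "healthcare": 40, "anatomy": 43}
after = {"STEM": 46, "healthcare": 44, "anatomy": 49}

for field in before:
    print(f"{field}: {pct(before[field])}% interested before, "
          f"{pct(after[field])}% more interested after")
# STEM: 72.4% interested before, 79.3% more interested after
# healthcare: 69.0% interested before, 75.9% more interested after
# anatomy: 74.1% interested before, 84.5% more interested after
```

The same helper reproduces the other proportions in this section, e.g., `pct(34)` gives 58.6 for the students who developed an interest in working in research.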
Students were also asked after participating in the superhero‐themed anatomy laboratory outreach session if they had developed an interest in working in research. Most students (34 students, 58.6%) reported they developed an interest in working in research, while 24 students (41.4%) did not. Those who responded that they were interested in working in research were asked to list what field and why. The responses for those who answered this question can be found in Table . Some students put specific fields of medicine or research, while others were more general (e.g., "I want to be a doctor") or noted that they were unsure of the specific field. The primary goal of this study was to evaluate the utility of using superheroes as a creative way to teach anatomy for high school and middle school students participating in a health professions outreach program. Anatomy is an important aspect of many outreach activities aimed at pre‐professional audiences. However, due to its scope and complexity, it can be daunting and even dull for students to approach. The use of superheroes to teach anatomy to undergraduate students has been a successful and creative way to navigate this barrier and to improve the student learning experience. While teaching children with superheroes has not been explicitly explored, research has shown that educational fictional media, including scientists with superpowers teaching scientific concepts, can support children's science learning when done using a thoughtful approach. Overall, the findings from the current study demonstrated that younger students, notably high school and middle school students, also find the use of superheroes in anatomy education to be effective at improving their learning experience, maintaining their interest in the content, and helping them gain a deeper understanding of the content, and that it is a preferred approach over traditional teaching without the inclusion of superheroes.
Interestingly, this perceived benefit of using superheroes in anatomy education was independent of the students' level of interest (i.e., how much of a fan of superheroes they are) and existing background knowledge about superheroes. While the examples in this curriculum were all superheroes, most students felt that the integration of any pop culture characters, including those from television, movies, and video games, would improve their learning experience. Prior to participating in the anatomy sessions, most students were already interested in the fields of STEM, careers in healthcare, and anatomy. Given that this program is aimed at children interested in health professions, this is relatively unsurprising. Several studies have demonstrated that STEM‐focused outreach programs for students can not only inspire a passion for science but also increase existing interest in these fields. After participating in the anatomy sessions, a large majority of the group reported an increased interest in the fields of STEM, careers in healthcare, and anatomy despite already having a pronounced interest. These findings demonstrate that this superhero anatomy curriculum engaged students enough, or engaged them in a novel way, to deepen their enthusiasm for the fields of STEM. The superheroes used to illustrate various concepts in anatomy and physiology were intentionally chosen to represent a diversity of gender identities, sexual orientations, races, ethnicities, and backgrounds or careers. Representation, or the inclusion of examples that relate to the personal identities of learners, has also been shown to improve the student educational experience across a variety of dimensions. For example, health professions outreach programs taught by faculty from racial or ethnic backgrounds traditionally underrepresented in healthcare have been shown to improve learner success for students from the same underrepresented backgrounds. Similarly, students who engage in experiences that allow them to visualize themselves in professional spaces through identity‐congruent role‐modeling are more committed to pursuing careers in healthcare. In this study, a majority of students felt that the inclusion of superheroes with different careers or backstories was a positive part of the learning experience, supporting the importance of representation in education. While only a minority of students felt that the inclusion of superheroes with different racial, gender, and sexual orientation identities helped them relate better to the material, this could be a function of the diverse study population and still suggests that representation across these specific domains is important for some students.

Limitations and future directions

The main limitations of this study stemmed from the population of students participating in the curriculum: specifically, the students already enrolled in the pre‐existing summer outreach programs hosted by NJMS. The students self‐applied to these programs and thus represent a more science‐focused population, which accounted for the initial high interest in the fields of STEM. On the other hand, these students come from a wide range of grade levels; while prior studies have shown that anatomy outreach programs can be comfortable for even middle school‐level students, the variability in educational backgrounds for the participants could have impacted their comparative perception of this activity. Future studies could examine the utility of superhero‐based anatomy and STEM education for a more general audience. In addition, other aspects of popular culture could be included in anatomy and STEM outreach programs. Other possible limitations of this study arose from the survey tool used.
The survey questions focused on student perception of their own experience, and thus were useful to gauge how students felt about the utility of superheroes in anatomy education. However, the questions did not go further to capture changes in knowledge, and thus these results may not totally address the utility of this session in the context of these outreach programs. Future studies could examine if a superhero‐based anatomy curriculum impacts student performance on knowledge‐ or even application‐based post‐session quizzes. The survey also asked about their level of interest in STEM, health care, and anatomy, but it did not specifically ask if this was related to the inclusion of superheroes or the experience overall. While most of the students did note an increase in interest, this cannot be specifically correlated to a component of the sessions. Finally, a possible limitation for the implementation of this curriculum, and thus of the generalizability of this study, is the facilitators' and students' knowledge of superheroes. These data suggest that superhero‐based anatomy education is perceived positively regardless of superhero background, and this may simply be a function of the creativity in the teaching approach used by this activity. However, this also may have been a function of the facilitators' comfort with presenting a complete description of each superhero's background, which other facilitators may not be able to do. Future studies could explore the use of other pop culture topics to improve the generalizability of this curriculum for facilitator comfort. The integration of superheroes into an anatomy curriculum for the health professions outreach programs at NJMS was a useful way to maintain interest and help provide an avenue for a deeper understanding of anatomy content. Overall, the integration of superheroes into these outreach sessions improved the students' learning experience. In addition, the sessions promoted increased interest in the fields of STEM, healthcare, anatomy, and research, which can possibly have a long‐term impact on increasing the interest and diversity of the field in the future. Rijul Asri: Conceptualization; data curation; formal analysis; investigation; methodology; writing – original draft; writing – review and editing. Humberto Baquerizo: Conceptualization; methodology; writing – original draft; writing – review and editing. Mercedes Padilla‐Register: Data curation; funding acquisition; project administration; writing – original draft; writing – review and editing. Maria Soto‐Greene: Funding acquisition; project administration; writing – original draft. Jeremy J. Grachan: Conceptualization; data curation; formal analysis; investigation; methodology; project administration; writing – original draft; writing – review and editing. Health Resources and Services Administration Hispanic Center of Excellence Grant D34HP49551 and the Victoria Foundation.
All authors have no affiliations with or involvement in any organization or entity with any financial interest or non‐financial interest in the subject matter or materials discussed in this manuscript. All the data reported in this manuscript were ethically obtained after IRB approval. The outreach programs discussed in this manuscript were funded by the Health Resources and Services Administration Hispanic Center of Excellence Grant D34HP49551 and the Victoria Foundation.
In addition, little research has been conducted on diseases affecting giraffes, which are primarily associated with their hooves and musculoskeletal system. However, there are few reports of E. coli disease in young giraffes. ExPEC infections are a serious threat to public health worldwide. Urinary tract infections, severe newborn meningitis, major intra-abdominal infections, and, less frequently, pneumonia, intravascular device infections, osteomyelitis, soft tissue infections, or bacteremia are the most troublesome illnesses. Bacteremia can result in sepsis, which is defined as life-threatening organ dysfunction caused by an unregulated immune response to infection. In this study, we describe the case of a giraffe that developed septicemia after an umbilical cord infection caused by E. coli. This case study may serve as a valuable reference and caution for veterinarians in zoos.

Clinical history

The mother of a female giraffe calf died of severe trauma approximately 5 h after delivery; hence, the juvenile giraffe could not receive colostrum and had to be artificially fed milk powder (Holstein milk + 10% colostrum). The juvenile giraffe was able to stand on its own 3 days after birth and was in good condition. However, on the eighth day after birth, the juvenile giraffe began to show clinical signs of loss of appetite, slow walking, and depression. Lactasin (Lactaid®, Johnson & Johnson Inc., Guelph, Canada; three caplets given with food) was administered orally twice a day for 4 days during the course of the disease, and the treatment was ineffective. On the 12th day after birth, the juvenile giraffe showed anorexia, tarsal joint swelling of the right hind limb, lameness, unwillingness to move, and a small amount of dirty yellow loose stool around the anus; it eventually became recumbent and died on the 14th day after birth.

Necropsy

A postmortem examination was performed within 2 h of the animal's death.
On gross examination, the umbilicus was dark red and swollen (Fig. A), and a small amount of dirty yellow sticky feces was present on the perianal coat. Serofibrinous arthritis and periarticular serous necrotizing inflammation: the hock joint of the hind limb was swollen, the adjacent subcutaneous tissue contained light yellow gelatinous material due to inflammatory edema, and the local skin was adherent to the subcutaneous tissue and muscle (Fig. B). A cystic necrotic focus had formed at the adhesion site, with a red inflammatory response zone at the margin and yellow necrotic tissue in the central area. A large amount of pale yellow translucent inflammatory fluid and yellow flocculent fibrinous exudate had accumulated in the joint cavities of the wrist, hock, and hip joints (Fig. C). Serous omphalitis with severe gelatinous swelling of the umbilical opening was obvious. The umbilical veins and bilateral umbilical arteries were significantly thickened, with black and red adventitia and gelatinous edema of the surrounding connective tissue. The umbilical arteries were filled with dirty dark red necrotic material, and the intima was rough (Fig. D). Severe serofibrinous pericarditis, pleuritis, and peritonitis: a large amount of pale yellow translucent fluid and yellowish-white flocculent fibrinous exudate was present in the pericardial, chest, and abdominal cavities, with slight adhesion of the local serous membranes (Fig. E and F). The kidneys and liver were swollen and dark red, with moist and glossy surfaces, and the submucosa of the renal pelvis was thickened and showed yellowish gelatinous edema. The lungs were enlarged, dark red in color, covered with flocculent fibrinous exudates, and the interlobular interstitium was generally widened and filled with yellow translucent gelatinous exudate (Fig. A). The transverse diameter of the heart was significantly widened, and the epicardium was covered with a flocculent yellowish-white fibrinous exudate.
Hyperemia and edema of the abomasal mucosa and intestinal pneumatosis were observed.

Histopathology

Serous and lobular interstitial pneumonia: the lobular interstitium was significantly widened and filled with homogeneous pink-stained serous fluid (Fig. A). A small amount of fibrin, diffusely infiltrating neutrophils, scattered or clustered small blue-stained bacilli, and a large number of neutrophils within lymphatic vessels at all levels were observed (Fig. B). Pulmonary hyperemia was present, and sporadic serous fluid, erythrocytes, and neutrophils were found in the alveolar and bronchial lumens near the lobular interstitium (Fig. C and D). Serous necrotizing umbilical arteritis: the tunica adventitia of the umbilical artery showed hyperemia, edema, and marked thickening and was filled with homogeneous pink serous fluid, scattered or diffusely infiltrating neutrophils, and scattered or clustered small blue-stained bacilli (Fig. E and F). Necrosis of the tunica intima and part of the tunica media with diffusely infiltrating neutrophils and blue-stained bacterial clusters of varying sizes was observed; there was a large amount of serous fluid, necrotic neutrophils, and erythrocytes in the lumen of the artery (Fig. F). Mild hepatic sclerosis: the hepatic interstitial connective tissue was mildly proliferated and widened, with proliferation of small bile ducts; hepatic edema with widened spaces of Disse, discontinuous hepatic sinusoidal walls, hemolysis, and hepatocytes separated from one another were seen. Mild steatosis and scattered necrosis of hepatocytes in the central area of the hepatic lobule were observed. Renal hyperemia and edema, mild to moderate swelling of the renal tubular epithelia, occasional necrosis of the tubular epithelia in some renal tubules, and increased neutrophil content in the pelvis were observed. In the adrenal glands, hyperemia and edema, loose capsules with scattered infiltrating neutrophils, and separation of cells in the zona fasciculata were observed.
Lymphocyte reduction, fewer lymph nodules with inconspicuous germinal centers, and diffuse hemorrhage of the medulla were observed in the lymph nodes. In the spleen, hyperemia and edema, significantly reduced lymphocytes, and white pulp nodules with sparse lymphocytes were observed. Mild to moderate cellular swelling of cardiomyocytes was observed. Serous necrotizing enteritis: significant edema and thickening of the small intestinal wall, a large amount of serous fluid, diffusely infiltrating neutrophils, and a necrotic mucosal layer were observed in the small intestine. The marginal acinar epithelial cells of the thyroid gland were partially necrotic. Blue-stained bacterial clusters of varying sizes or diffuse blue-stained small bacilli were present in the interstitium and serous membranes of most tissues and organs as well as in small blood vessels and lymphatic vessels (Fig. A). This was accompanied by scattered or diffusely infiltrating neutrophils, particularly in the lymphatic vessels of tissues filled with neutrophils (lymphatic spread). The endothelial cells were severely separated from the media of the small vessels because of edema.

Bacterial isolation and molecular identification

Pleural fluid, pericardial exudate, ascites, joint fluid, lung, liver, and umbilical artery wall samples were aseptically collected with an inoculation loop, inoculated on MacConkey and eosin-methylene blue (EMB) media, and incubated at 37 °C for 24 h. Many small pink colonies grew on the MacConkey medium. The EMB medium grew many small, round, shiny black colonies characteristic of E. coli. Using an inoculation loop, a small amount of the organism was collected to prepare a smear. Small gram-negative rods with morphology consistent with E. coli were detected by Gram staining (Fig. B). In this study, the 16S rRNA of the cultured bacteria was sequenced.
We selected ten colonies from each plate (70 colonies in total) for polymerase chain reaction (PCR) detection and sequencing. A general primer set (10Fx: 5′-AGAGTTTGATCCTGGCTCAG-3′; 1509R: 5′-GTTACCTTGTTACGACTTCAC-3′) was used to amplify the 16S rRNA from all the colonies isolated from the baby giraffe samples. For amplification, the following conditions were used: initial denaturation at 95 °C for 3 min; 30 cycles of denaturation (30 s at 94 °C), annealing (30 s at 55 °C), and extension (1.5 min at 72 °C); and a final extension at 72 °C for 5 min. The amplified PCR products were analyzed on 1.5% agarose gels, purified, and sequenced. Through BLAST searches, the sequences were compared with those in the NCBI database. The results indicated that all 70 colonies were E. coli; they also revealed a nucleotide sequence similarity of 99.16–99.79% to strains from human feces (CCFM8332), Yuncheng Salt Lake (YC-LK-LKJ9), poultry droppings (AKP_87), marine environments (CSR-33, CSR-59), wetland (CH-8), and a wastewater treatment plant (WTPii241) (Fig. C). The phylogenetic group of the E. coli isolate was identified using the PCR-based method developed by Clermont et al., in which E. coli is classified into four main phylogenetic groups (A, B1, B2, and D) based on the presence of three markers (chuA, yjaA, and TSPE4.C2) in the DNA. Crude DNA was extracted from colonies by lysing them in sterile water at 100 °C for 15 min, followed by centrifugation. The lysis supernatant was used for PCR, following the conditions outlined by Clermont et al. The primers utilized in this investigation are detailed in Supplementary Table 1. PCR analysis of the isolate indicated its classification within phylogenetic group B1 (Fig. A).
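The Clermont triplex-PCR scheme used here is, at its core, a simple dichotomous key on the three marker results. Below is a minimal sketch of that decision logic; the function name and the boolean encoding of the gel results are illustrative, while the branch rules follow the original key of Clermont et al. (2000):

```python
def clermont_phylogroup(chuA: bool, yjaA: bool, tspE4_C2: bool) -> str:
    """Assign an E. coli isolate to phylogenetic group A, B1, B2, or D
    from the presence (True) or absence (False) of the three PCR markers,
    following the dichotomous key of Clermont et al. (2000)."""
    if chuA:
        # chuA-positive isolates are group B2 (yjaA-positive) or group D
        return "B2" if yjaA else "D"
    # chuA-negative isolates are group B1 (TspE4.C2-positive) or group A
    return "B1" if tspE4_C2 else "A"

# The isolate in this case typed as B1, i.e. chuA-negative and TspE4.C2-positive:
print(clermont_phylogroup(chuA=False, yjaA=False, tspE4_C2=True))  # prints B1
```

Note that yjaA only disambiguates B2 from D among chuA-positive isolates, so its result is irrelevant to the B1 call reported in this case.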
A total of twenty-five virulence genes were screened, including PAI, papA, fimH, kpsMT III, papEF, ibeA, fyuA, bmaE, sfa/focDE, iutA, papG allele III, hlyA, rfc, nfaE, papG allele I, kpsMT II, papC, gafD, cvaC, focG, traT, papG allele II, afa/draBC, cnf1, and sfaS. Each virulence gene was amplified using specific primers in PCR. The primers utilized in this investigation are detailed in Supplementary Table 1. Thermal cycling conditions included an initial denaturation cycle at 94 °C for 2 min, followed by 35 cycles at 94 °C for 1 min, annealing at a specific temperature for 1 min, and extension at 72 °C for 1 min, with a final cycle at 72 °C for 2 min. In this strain, 6 virulence genes (PAI, iutA, papG allele III, cvaC, sfaS, afa/draBC) associated with adhesion, toxicity, and environmental response were identified (Fig. B). E. coli strains were tested for antibiotic susceptibility using CLSI guidelines and a disc diffusion method with 16 antibiotics. The resistance profiles of the E. coli strains to the antibiotics tested are outlined in Table , with interpretation of all susceptibility results based on the CLSI guidelines. The strains exhibited resistance to ceftazidime, ceftriaxone, ciprofloxacin, levofloxacin, amoxicillin, and azithromycin, while demonstrating susceptibility to penicillin, oxacillin, lincomycin, clindamycin, ampicillin, and cotrimoxazole.

Discussion

Among neonatal hand-reared giraffes, failure of passive transfer of immunity (FPI) continues to be a problem. The cotyledonary placentas in giraffes transfer negligible antibodies. Therefore, newborns rely on colostrum consumption and the absorption of maternal antibodies across the intestines during the first 24–48 h after birth.
FPI increases the risk of diarrhea, enteritis, septicemia, arthritis, omphalitis, and pneumonia in domestic ungulates. Passive immunity transfer during the newborn's first week is crucial for the successful rearing of ruminant neonates. To ensure optimal and steady growth, milk replacers must have a composition similar to that of giraffe milk. Bovine milk and colostrum have been effectively utilized and advised for hand-rearing giraffes despite the lower fat and protein contents of cow's milk and milk substitutes compared with giraffe milk. Until the regular consumption of solid food, milk should be consumed daily in amounts of 7–10% of body weight (19,000–25,000 kcal/day). In the present study, a hand-fed giraffe calf (which did not receive colostrum) died of septicemia caused by E. coli. Septic arthritis and phlegmon are caused by trauma or systemic infection. No trauma was recorded in this giraffe calf. Therefore, systemic infection may have contributed to the septic polyarthritis and/or phlegmon observed in this study. Enteritis, pneumonia, and funisitis are common sources of infection in giraffe calves; enteritis and pneumonia were not recorded in this calf before the development of arthritis. Furthermore, the lack of immunocompetence might have put the calf at risk of the infection spreading systemically through the umbilical cord. Thus, the septic polyarthritis and/or phlegmon may have been caused by systemic infection. PCR and sequence analysis confirmed that E. coli was the cause of bacteremia in the present case. E. coli colonizes the newborn's gastrointestinal tract shortly after birth and typically coexists with its host without causing disease. However, certain strains with specific virulence attributes can cause a range of illnesses in immunocompromised hosts or when gastrointestinal barriers are compromised. Extraintestinal pathogenic E.
coli (ExPEC) strains are characterized primarily by their site of isolation, with the most clinically significant groups being uropathogenic E. coli (UPEC), neonatal meningitis-associated E. coli (NMEC), avian pathogenic E. coli (APEC), and septicemic E. coli (SEPEC). ExPEC strains have the ability to cause infections in various extraintestinal locations. In the present case, the ExPEC strain resulted in pneumonia, umbilical arteritis, hepatitis, nephritis, hemorrhagic lymphadenitis, necrotizing enteritis, and necrotizing thyroiditis in the baby giraffe. There is no doubt that this was a direct result of E. coli bacteremia. To initiate bacteremia, an ExPEC strain must successfully infiltrate initial sites of infection or colonization, disseminate throughout the bloodstream, and persist within the blood. An ExPEC strain can access the bloodstream through various pathways. Bacteremia lacking a discernible origin is classified as primary, while secondary bacteremia may result from dissemination originating from an existing infection, such as pneumonia or urinary tract infections, or from contaminated medical equipment. In this case, however, the bacteremia was likely the result of an umbilical cord infection. Improper handling of the umbilical cord presents a potential risk of infection, as it serves as a significant entry point for pathogens in newborns. Therefore, it is strongly advised that veterinarians adhere to proper disinfection, sterilization, isolation, and other cleaning protocols to ensure optimal umbilical cord hygiene when handling neonates. ExPEC uses various factors to cause disease in animals, including adhesins, invasins, protectins, iron acquisition systems, and toxins. These factors help ExPEC adhere, invade, evade the immune system, colonize, proliferate, and spread throughout the body, leading to infection in animals.
Other bacterial factors such as secretion systems, quorum sensing systems, transcriptional regulators, and two-component systems also play a role in ExPEC pathogenesis. In this study, virulotyping revealed that the E. coli strain was positive for PAI, iutA, papG allele III, cvaC, sfaS, and afa/draBC. Adhesins are bacterial components that help bacteria stick to other cells or surfaces, increasing their virulence. Specific adhesins are adapted to colonize different environments. The virulence genes linked to adhesion here include papG allele III, sfaS, and afa/draBC. Iron is a crucial micronutrient necessary for the growth and proliferation of bacteria within the host following successful colonization and/or invasion. Among the most significant plasmids associated with ExPEC virulence are ColV and ColBM, particularly those containing the aerobactin operon (iutA/iucABCD). This operon encodes high-affinity iron-transport systems that enable bacteria to acquire iron in low-iron environments, such as those found in host fluids and tissues. Our isolate was found to possess the iutA gene, which facilitates survival under low-iron conditions. Antibiotics are commonly utilized for the prevention and treatment of ExPEC infections. However, the widespread use of antibiotics has been linked to the development of multidrug-resistant bacteria. The high levels of antibiotic resistance observed in ExPEC strains present a significant risk to human health, as antibiotic-resistant bacteria and genes can be transmitted through the food chain. Previous research has shown that ExPEC isolates exhibit resistance to multiple antibiotics, underscoring the importance of conducting antibiotic susceptibility testing to identify the most effective treatment option. In this particular instance, the E. coli strain exhibited broad-spectrum beta-lactamase production.
β-Lactam antibiotics, particularly third-generation cephalosporins, are commonly prescribed for the treatment of serious community-onset or hospital-acquired infections caused by E. coli. Regrettably, β-lactamase production in E. coli continues to be a significant factor in the development of resistance to β-lactam antibiotics. β-Lactamases are bacterial enzymes that render β-lactam antibiotics ineffective through hydrolysis. This study presents findings on septic polyarthritis and/or septicemia in a juvenile giraffe, potentially attributed to insufficient colostrum intake and E. coli infection via the umbilical cord. Furthermore, the study elucidates the diverse array of virulence factors exhibited by the E. coli strain and underscores the pathogenic significance of these pathogens in animal health. Continued research is warranted to identify additional virulence factors and elucidate the pathogenic mechanisms, ultimately aiding in the development of effective diagnosis and treatment strategies for managing giraffe colibacillosis.
Storylines of family medicine V: ways of thinking—honing the therapeutic self (PMC11029209)

Family physicians can use the perspectives they bring to their encounters with patients as therapeutic tools applied in service of improving patients' health and well-being. To develop this health-promoting ability, three tasks are essential: (1) appreciating the importance of compassion and humanism in the practice of medicine, (2) recognising and observing the nature of clinical encounters as relational experiences between physicians and patients and (3) reflecting on (1) and (2), not simply to catalogue interesting interactions but to improve one's therapeutic repertoire through ongoing attention and thoughtful contemplation.

Johanna Shapiro and Cindy Haq

The art of reflection in action—attending simultaneously to the emotions and processes of care while managing the content and timing of the clinical encounter—is an essential component of successful family medicine. 'I can't breathe!' said Sandy, a 56-year-old woman who came to our after-hours clinic during the pandemic lockdown. A patient with asthma, Sandy had recently been laid off from her job as a server. She was hungry, frightened and feared eviction. Despite normal respirations and blood oxygen, and no wheezing, this urgent visit was a pleading call for help from a patient who was suffering. Reflection in action refers to continual self-awareness and other-awareness while in the midst of ongoing practice. For busy family physicians, the capacity to develop this moment-by-moment awareness is crucial. The goal of reflection is to modify and refine behaviour in real time to improve clinical outcomes, build the patient–doctor relationship, and enhance physician well-being and joy in practice.
Thus, reflection in action results in continual refinement and adjustment of attitudes and behaviours, tailoring these to the clinical situation as it emerges from moment to moment. To use the word of the Brazilian educator and philosopher Paulo Freire, what results is 'praxis'—action informed by reflection rather than automaticity. Family physicians can enhance their reflection-in-action skills by practising the following:
- Set an intention to care for patients as if they are the most important person in the world.
- Maintain awareness of the patient's affect, words and non-verbal communications throughout the visit.
- Note personal thoughts, emotional reactions and physical responses.
- Check for assumptions, biases and premature closures, even when feeling pressured to conclude visits and move on to other demands.
- Manage the flow of the visit to elicit patients' concerns and perspectives, collect essential information, conduct appropriate examinations, manage time and attend to other essential tasks.
- Assess, discern, negotiate and explain what you recommend for patients, noting context, resources and circumstances beyond the patient's or your control.

In learning how to reflect while doing, it is important to recognise that the emotions patients and physicians experience can either enhance or detract from clinical encounters. The goal is not to ignore the patients' emotions nor suppress one's own feelings. Rather, by recognising, acknowledging and being curious about the emotional currents in the exam room—all the while not identifying too strongly with them—physicians can soften and settle negative emotions to open space and promote expressions of empathy and compassion. Through reflection in action, family physicians can learn to act on values to serve patients' needs. They can also learn to acknowledge frustrations and moral distress resulting from the gaps between the patients' needs, professional convictions, available resources and dominant institutional norms.
The clarity and perspectives gained from reflection in action enable family physicians to become coaches, advocates and change agents. Family physicians can practice reflection in action to provide the best care for all patients and to transform healthcare to become more just and equitable. In the case of Sandy, reflection in action facilitated acknowledgement of her very real fears and anxieties. Reassured that her breathing was normal, she received information to access local food banks, rent support and social services. Her urgent appointment was a step towards continuity and follow-up visits to address health maintenance and provide support.

Readings
Epstein RM. Mindful practice. JAMA 1999;282:833–9. doi: 10.1001/jama.282.9.833
Shapiro J, Talbot Y. Applying the concept of the reflective practitioner to understanding and teaching family medicine. Fam Med 1991;23:450–6.
Shapiro J. The feeling physician: educating the emotions in medical training. Eur J Pers Cent Healthc 2013;1:310–6.

Jéssica Leão and Don Nease

Balint groups—peer-led group discussions that review emotionally challenging encounters with patients—can enhance clinicians' abilities to nurture their therapeutic relationships with patients. One of the main ways to enrich clinician–patient relationships is through Balint groups, named after Michael and Enid Balint. Michael trained and practised psychiatry in Hungary prior to immigrating to the UK just before World War II. Enid was trained in the UK as a social worker with a keen eye for assessing group dynamics and interpersonal relationships.
On Michael's arrival at London's Tavistock Clinic, both worked to raise awareness of the therapeutic importance of the doctor–patient relationship in the generalist practice of medicine. The Balints sought to use a group-based case discussion format as a crucible to reproduce the forces at work in relationships between generalist doctors and their patients. The structure was basic: a general practitioner (GP) would present a troubling case, and the other doctors in the group shared their respective perspectives on concerns of relational interest. There was little focus on the medical aspects of the case. What emerged during these group sessions was the recognition that the main therapeutic agent GPs used in their interactions with patients was none other than themselves—doctors. The main 'drug' employed by GPs, especially in challenging encounters, was the manner by which they attended to their patients' expressed problems. The key goals of Balint groups were thus to help the physicians in attendance (1) recognise the healing value of their relationships with patients; (2) develop a commitment to hone their relational skills as a therapeutic tool; (3) accept that, as with all medical treatments, there are risks and benefits to any intervention (including relational ones); and (4) work to minimise side effects and maximise the effectiveness of their relational interactions with patients. Balint groups are still, for the most part, run in the same fashion as started by the Balints. A clinician presents a bothersome patient case—in reality, a bothersome patient relationship—to their peers in the group. Groups meet regularly, and one or two group leaders guide the ensuing discussion, encouraging group members to articulate emotional responses to the case presentation. As trust develops between group members, often deeply intimate aspects of human concern emerge.
Much of the work of such groups is intrapersonal in nature—physicians organically learn more about their own emotional intelligence and relational agency through dynamic group reflection in a supportive environment. Traditionally, medical education reduces the patient in question to the study of disease and promotes distance between doctors and patients. Thus, doctors often fail to recognise the importance of relationships to the therapeutic process. Few see themselves as potential agents of healing. In that one key element of family medicine is its focus on clinician–patient relationships, however, Balint groups can help. They can help family doctors develop ways to deal with the feelings that challenging encounters with patients can stimulate. They also help promote empathy and lessen the risk of burn-out. Most important, they are one way for family physicians to reconnect with their own personal therapeutic power and bring that power to their encounters with patients.

Readings

Balint M. The doctor, his patient, and the illness. Lancet 1955;268:683–8. doi: 10.1016/s0140-6736(55)91061-8
Lichtenstein A. Integrating intuition and reasoning—how Balint groups can help medical decision making. Aust Fam Physician 2006;35:987–9.
Roberts M. Balint groups: a tool for personal and professional resilience. Can Fam Physician 2012;58:245–7.

Bill Ventres, Liz Grant, Stewart Mercer and John Gillies

To show compassion—a cognitive and emotional act—one must recognise distress in others and feel moved to validate and reduce this suffering.
Compassion is an important therapeutic skill when working with patients; physicians should work to cultivate self-compassion as well. Compassion is an important therapeutic skill, yet the experience and expression of compassion remain one of those ‘I know it when I see it’ concepts. Patients can sense when their clinicians are genuinely compassionate and when they are not. Psychologically, being compassionate in medicine means (1) understanding the connections between illness and suffering and (2) recognising suffering in individual patients and, collectively, in communities; it also means holding uncomfortable feelings while working to alleviate suffering. Practically, being compassionate means approaching patient encounters with virtuous intent, a willingness to listen to patients’ stories in an attempt to fathom their experiences, and a readiness to forge healing alliances to help ameliorate suffering. Many words touch on key elements of compassion. Care, empathy, respect, kindness and consideration are but a few, and scholars of medicine and the humanities have dedicated significant effort to tease out the essential features of these elements. They have also worked to transform the idea of compassion into practical applications; several models of compassionate behaviour have emerged from these efforts. Here, we approach compassion differently. We focus on the need to cultivate compassion within the community of medical professionals. We believe that although words and applications are important to learn and apply, family physicians and other clinicians are more likely to express compassion authentically by attending to several personal building blocks:

Notice what is happening—It is often easier to focus on the strictly biomedical aspects of medical work, isolating it from the other aspects of patients’ lives; be curious and open-minded vis-à-vis patients.
Accept the power of emotions—Feelings inevitably shape how patients experience illness and often influence whether patients participate in healthy decision-making; acknowledge this fact.

Cherish feelings of empathy—Traditionally, physicians have been taught to remain emotionally neutral in respect to patients; accepting their own human nature and the emotions that go along with that recognition can help physicians engage with patients as appropriate.

Dare to act with kindness—Create opportunities to build and grow thoughtful kindheartedness towards patients, families, colleagues and others.

Risk receiving compassion in return—No one works in isolation; rise above challenges with the help of others and reciprocally help others rise above their own challenges.

The experience and expression of compassion reflect a mixture of knowledge, attitudes, skills, intentions and relational attributes. Compassion is a learnt belief, not an automatic response. It is a wish to be helpful that involves a process of discernment and reasoning. It is a learnt habit that physicians are compelled to provide. Compassion is therapeutic for both patients and practitioners; it enhances trust, improves medical outcomes and increases clinicians’ joy of practice.

Readings

Halpern J. What is clinical empathy? J Gen Intern Med 2003;18:670–4. doi: 10.1046/j.1525-1497.2003.21017.x
Mercer SW, Reynolds WJ. Empathy and quality of care. Br J Gen Pract 2002;52:S9-12.
Rakel RE. Compassion and the art of family medicine: from Osler to Oprah. J Am Board Fam Pract 2000;13:440–8. doi: 10.3122/15572625-13-6-440
Pablo González Blasco, Maria De Benedetto, Graziela Moreto and Marcelo Levites

Engaging a humanistic outlook is key to caring for patients in family medicine. The humanities offer a means to cultivate this outlook. Guidelines, outcomes and clinical trials are at the forefront of medical training and practice. Objective knowledge is considered scientific, and valuable emerging technologies often monopolise students’ learning efforts and practitioners’ clinical attentions. Subjective information—key to a humanistic approach to doctoring—is regularly thought to be soft and second-rate; however, the idea that subjective information is of lesser value is not only false but also an impediment to relieving suffering and promoting health. Doctors exist to care for patients. This care clearly includes the ability to collect informative histories, perform thorough physical examinations, choose and interpret suitable diagnostic studies, apply technical knowledge in search of appropriate treatments and adeptly perform necessary procedures. However, caring for patients also implies that clinicians appreciate the people they serve and work to understand the human condition, including the effects of such circumstances as sickness, suffering and—ultimately—death, as well as those of recovery and renewal. Especially important for family physicians and other generalist clinicians, the humanities help doctors cultivate humanistic approaches to caring for patients. The humanities provide a source of insight and understanding, enabling doctors to understand patients in the context of their lived experiences. Rather than just an appendage to medical knowledge and the application of clinical skills, the humanities are necessary instruments in the therapeutic toolbox of proper doctoring.
Without the humanities and the humanistic spirit they engender, doctors would simply act as mechanics trained to fix patients’ immediate presenting problems and not as the compassionate professionals patients hope for. Integrating the humanities into family medicine education and practice can take many forms. Literature, theatre, poetry, opera, movies and even music can help promote consideration of personal values in the face of life’s challenges. Stories—personal narratives—can serve as launching points for emotionally rich discussions and ethical reasoning. Art, in all its sensory forms, can stimulate both emotion and imagination, which through reflection and dialogue can in turn sharpen awareness, enhance empathy, and facilitate a constructive approach to uniting the affective and cognitive facets of patient care into one wise therapeutic process—plain doctoring, the generalist practice of family medicine. Family medicine is an art that recognises the uniqueness of each patient: it considers pathology and the way in which pathology is experienced by any one person. Such a practice necessitates uniting a humanistically informed approach to patient concerns with the traditional biomedical, disease-oriented approach. By incorporating the humanities, family physicians can provide person-centred medicine, an elegant exercise that merges science and art in service of holistic care.

Readings

Gordon J. Medical humanities: to cure sometimes, to relieve often, to comfort always. Med J Aust 2005;182:5–8.
Kumagai AK. Perspective: acts of interpretation: a philosophical approach to using creative arts in medical education. Acad Med 2012;87:1138–44. doi: 10.1097/ACM.0b013e31825d0fd7
Shapiro J. Perspective: Does medical education promote professional alexithymia? A call for attending to the emotions of patients and self in medical training. Acad Med 2011;86:326–32. doi: 10.1097/ACM.0b013e3182088833
Jen DeVoe

The work of family medicine is often an intimate one. Why? Because working with patients means a closeness of spirit, hope and—inevitably—loss.

A few weeks into the COVID-19 pandemic, I logged into a virtual visit with my 80-year-old patient Henry. He was to see me for a preventive health visit following a gruelling but successful 18-month battle with lymphoma. His cancer was in remission. I expected our conversation to be one of celebration. Instead, I learned his wife of 61 years was dying of pancreatic cancer. I did not know what to say. After fumbling my way through a few condolences, I managed to ask Henry how he was doing. After a long pause, he responded. ‘My wife is dying right before my eyes, and I can’t do a damn thing about it.’ I learned that Henry’s wife had recently entered hospice and that all her care was virtual; no one was visiting due to COVID precautions. I recommended we schedule routine calls for blood pressure monitoring; these calls would give me a chance to talk with him regularly. At our next call, after briefly chatting about his blood pressure, I again asked, ‘How are you doing?’ Barely audible, he stammered, ‘I can’t live without her. My heart is broken.’ Henry and I had a few more calls during his wife’s last weeks of life, and then she was gone. When he called to inform me, I could hear in his voice that it was the beginning of the end for him. I recommended we continue our follow-up calls, mostly so I could offer Henry grief support.
He declined all other services. When it was safe to return to in-person visits, I saw Henry in clinic. We hugged, and he whispered in my ear, ‘Thank you, Dr. DeVoe. That is the first human contact I’ve had in six months.’ Soon after, Henry started falling, alone at home, late at night. Many mornings, upon opening my electronic medical record, I saw his name on my list of patients on our hospital service, for lacerations, broken ribs, and compression fractures. Our routine clinic visits became regular hospital visits. When I recommended that Henry consider moving to an assisted living facility, he politely told me he would never leave his home—a move would dishonour the memory of his wife. Our team scheduled home health services and strategized on ways to offer support. Eventually, a home health nurse informed me he had died at home. When his death certificate arrived in my mailbox, I paused before writing ‘undetermined cause of death’ in the appropriate space. What did I really want to write? ‘Cause of death: broken heart.’

Every time a patient dies, a jumble of emotions fills my mind and my soul. I am grateful to have had the privilege of being the personal physician to these patients, now deceased. I also often struggle. Did I achieve the right balance between working to keep patients alive and helping them die with grace and dignity? Such is the nature of my work in family medicine. Sharing professional intimacy with patients opens the door to many joys; there also exist the inevitable challenges inherent to any close relationship.

Readings

Woodruff A. Keeping the family in family medicine. Am J Hosp Palliat Care 2021;38:313–4. doi: 10.1177/1049909120933273
Byock I. Suffering and wellness. J Palliative Med 2009;12:785–7. doi: 10.1089/jpm.2009.9568
Yeo M, Longhurst M. Intimacy in the patient-physician relationship. Committee on Ethics of the College of Family Physicians of Canada. Can Fam Physician 1996;42:1505–8.
Bill Phillips, Jane Uygur and Tom Egnew

‘Healthcare is more about love than about most other things. It is built on the relationship between physician and patient, one in which the physician works to relieve the suffering of the patient.’ —Donald Berwick, US paediatrician and healthcare consultant

Patients experience suffering when they perceive a threat to their integrity as whole persons. When illness threatens the body, mind or spirit, it is the physician’s duty to identify, manage and relieve suffering. As doctors, we see suffering in the faces of our patients and hope to understand the experience of illness in their lives. Through our quest to know disease, understand people and comprehend health, we work towards finding a way to relieve suffering. Patients suffer because of medical problems or treatments, as illuminated by patient narratives, research in a variety of specialties, and the insights of nurses, social workers, mental health professionals, and others on our caring teams. As comprehensive physicians, family doctors traverse whole landscapes of human health and illness and see patients and their suffering in the full context of their lives: work, families and communities. We walk with patients through their days, seeing suffering across problems, through time and over the span of human life. Thus, the perspective of family medicine is a natural foundation for navigating the breadth and depth of suffering. The biopsychosocial model is the map of the territory of health, illness and suffering.
Patient-centred care is the compass to find the way into—and perhaps out of—each patient’s personal experience of sickness and suffering. We suggest family physicians use a comprehensive clinical model of suffering to translate this multidimensional perspective into clinical action reflecting multigenerational experience, cross-specialty responsibility and interdisciplinary synthesis. At the core of this model, we see that suffering arises when illness and distress threaten loss. Loss—or fear of loss—can lead to despair and isolation. Physicians must first identify signs of distress and recognise patients’ suffering. This requires the skills, time and care to observe and see, to listen and hear. Every patient is unique, illness is complex and suffering is personal. Suffering can manifest in any or multiple domains of life. It can arise from (1) troubling symptoms, (2) loss of function, threats to (3) roles and (4) relationships, distressing (5) thoughts and (6) emotions, (7) disruptions to the narratives of patients’ life stories and (8) conflicts with patients’ spiritual or intellectual worldviews. These eight domains of suffering can be organised for clinical care, teaching and research on four axes: biomedical, sociocultural, psychobehavioural and existential. This comprehensive model helps organise the inquiry. It serves the clinician as a ROS—not a ‘review of systems’ but a deeper ‘review of suffering.’ Our goal is to see patients’ particular views of illness and to understand their unique experiences in the full context of their lives. To help heal, we need to comprehend how their sense of wholeness as a person is threatened. The chief aim of medicine is to alleviate suffering. By recognising the patients’ suffering, we can offer care and hope. Sometimes it can be hope for a cure—always it is hope for control of symptoms, relief of distress and emotional support.
By understanding suffering, we can better help patients rediscover meaning, gain acceptance and reconstitute wholeness.

Readings

Cassel EJ. The nature of suffering and the goals of medicine. N Engl J Med 1982;306:639–45. doi: 10.1056/NEJM198203183061104
Egnew TR. Suffering, meaning, and healing: challenges of contemporary medicine. Ann Fam Med 2009;7:170–5. doi: 10.1370/afm.943
Phillips WR, Uygur JM, Egnew TR. A comprehensive clinical model of suffering. J Am Board Fam Med 2023;36:344–55. doi: 10.3122/jabfm.2022.220308R1

Tom Egnew and Bill Phillips

By recognising and addressing the existential challenges of chronic, serious, or terminal illness, physicians can help patients find meaning, transcend their suffering and achieve holistic healing. Family physicians provide comprehensive healthcare for patients from cradle to grave. Most patient encounters involve little or no overt suffering. With self-limited or curable disease, the patient recovers and resumes their journey on the road of life. With more serious, chronic or terminal illnesses, greater distress and disability challenge the patient’s sense of integrity as a person. Losses mount, personhood is threatened and suffering deepens. Illness is a personal passage from the realm of health into the regions of sickness. Suffering arises when a person believes that they can no longer be the person they have known themself to be. Suffering evolves not only from biophysical changes but also from threats to any aspect of personhood, including the psychological, social, vocational and spiritual.
It is rooted in the meaning a person ascribes to such changes. Suffering reflects existential challenge, interpreted through the patient’s narrative and personal story of brokenness. Medicine’s foundational goals are to cure when possible, comfort always, relieve suffering and heal patients. Most biomedical discussions of healing focus on tissue repair and the diagnosis, treatment and cure of disease. Illness is more than disease, understanding is more than diagnosis and care is more than treatment. Holistic healing is more than repairing tissues and curing disorders. Holistic healing can be defined as the personal experience of the transcendence of suffering. Transcendence occurs when patients discover meaning in or come to accept their changed circumstances. When patients find meaning and acceptance in their experience, transcendence of suffering can lead to holistic healing despite incurable disease, debilitating impairment or impending death. While patients must find healing themselves, physicians can assist them along their paths. Doctors can turn towards the patient’s suffering, listen attentively to their struggles and help them refocus and reclaim that which brings meaning and purpose in their lives. This requires mindful management of one’s own anxiety, willingness to share some of the patient’s distress and courage to control the interventional imperative inherent in the culture of medicine. As trusted and empathetic witnesses, physicians can ease some of the isolation of suffering and foster hope. Sharing a history of continuous, comprehensive care steeped in deep contextual knowledge empowers family physicians to help patients navigate their experiences of illness. Using narrative medicine skills, they can guide dialogue to assist patients in editing their stories, perceiving new meaning in life, finding acceptance and reconstituting a sense of wholeness. 
In caring for patients across the lifespan, family physicians engage the full spectrum of illness, loss, crisis and death. These transitions create existential challenges in patients’ lives and new challenges for their caregivers and loved ones. Comprehensive care calls family physicians to recognise suffering, manage effective responses to patients’ needs and contribute to healing. Caring for patients in this way can be challenging. Yet physicians who explore patients’ experiences of serious illness and help them edit their stories of brokenness often discover this care to be some of the most fulfilling work of their careers.

Readings

Toombs SK. Healing and incurable illness. Humane Med 1995;11:98–103.
Hsu C, Phillips WR, Sherman KJ, Hawkes R, Cherkin DC. Healing in primary care: vision shared by patients, physicians, nurses and clinical staff. Ann Fam Med 2008;6:307–14. doi: 10.1370/afm.838
Scott JG, Cohen D, DiCicco-Bloom D, Miller WL, Stange KC, Crabtree BF. Understanding healing relationships in primary care. Ann Fam Med 2008;6:315–22. doi: 10.1370/afm.860

Colette Stanley

Family medicine is built on stories. Our visits with patients create the foundation on which these narratives are built. Patients share their histories with us, those of present concerns and past experiences: these are the stories of their lives. Throughout our training we, as family physicians, hone our abilities to listen to these stories, retell them during rounds and rewrite them in our notes. These stories are truly why we are family physicians.
I became a family doctor because I reveled in the idea of walking with someone through various stages of their life. For me, there is nothing more rewarding than caring for a woman during her pregnancy, delivering her baby, and sharing the joy as that baby achieves all their milestones over time. At the other end of life’s spectrum, it is an honor to care for someone through health and sickness, helping them find comfort as they face their mortality. Becoming an integral part of patients’ life stories is a gift to be treasured. Sometimes the stories we hear are heavy burdens carried from room to room as we navigate our days. I once sat with a patient during a prenatal visit, listening to her tell me that she hadn’t felt her baby move in a few days. My worst fears of fetal demise were confirmed on an in-office ultrasound. After sending her off to Labor & Delivery via ambulance, I walked into my next patient’s room. He was a 70-year-old man who smiled at me as I walked in, eager to show me his improved blood sugars over the last month. The emotional roller coaster can be overwhelming at times. Our jobs are difficult, but often also rewarding in the most surprising ways. Making the right diagnosis is great for my ego, but it’s the relationship-building that keeps a lasting smile on my face day after day. On a particularly challenging day in clinic, I walked into my two-year-old’s well-child check-up. He was pulling at everything he could reach in the room as mom attempted to reel him in. My laptop seemed to catch his eye as I sat down while mom told me about plans for his upcoming birthday party. The little guy walked over and motioned to be picked up. I sat him on my lap as mom repeatedly apologized. Little did she know, this was the highlight of my day.
There is much that sidetracks us from our core family medicine values, creating for us a version of mission impossible: insurance companies, rising costs of vital medications, RVUs (relative value units, measures of clinical productivity) and never-ending piles of paperwork are all contributing agents. To re-centre ourselves amid these distractions, we need to find ways to stay true to our purpose—reconnecting with why we embarked on this mission in the first place. Listening to and relating stories are what keep me grounded. Hearing and recounting these narratives serve to release the emotions I do not usually get to process as I am hustling from patient to patient, bearing witness to people’s suffering and joy. It’s a therapeutic process that keeps me rooted in my ‘why’—my purpose in practice—a constant reminder of who the lead characters really are in my own professional story: my patients.

Readings

Borkan J, Reis S, Medalie J. Narratives in family medicine: tales of transformation, points of breakthrough for family physicians. Fam Syst Health 2001;19:121–34.
Ventres W, Gross P. Getting started: a call for storytelling in family medicine education. Fam Med 2016;48:682–7.
Verghese A. The physician as storyteller. Ann Intern Med 2001;135:1012–7.
Effect of holmium laser prostatectomy on surgical outcomes of primary bladder neck obstruction

Benign prostatic hyperplasia (BPH) is the most common cause of lower urinary tract symptoms (LUTS) in older men. However, other causes of LUTS exist, including overactive bladder, urethral stricture, prostatitis, urinary tract infection, and neurogenic bladder dysfunction. Primary bladder neck obstruction (PBNO) causes LUTS without BPH. This condition is considerably rare and not fully understood by urologists. To date, literature on the natural course, etiology, and presentation of PBNO is limited. PBNO has not been properly established as a disease entity, leading to a significant number of misdiagnoses in clinical practice. The clinical presentation of PBNO includes various symptoms such as voiding symptoms, storage symptoms, and pelvic pain and discomfort. Videourodynamic study (VUDS) was considered the gold standard for the diagnosis of PBNO. However, challenges exist in clinical practice, such as the radiation exposure and high costs associated with performing VUDS to diagnose PBNO. Alpha-blockers are the first-line treatment for PBNO. Surgical treatment mainly involves transurethral incision of the bladder neck. However, to our knowledge, no studies are available on surgical treatment for PBNO using a holmium laser. Since 2018, we have diagnosed PBNO using cystourethroscopy and treated it with holmium laser prostatectomy. This study aimed to evaluate the efficacy and safety of holmium laser prostatectomy in patients diagnosed with PBNO compared to those diagnosed with BPH.

Patients

This study included patients who underwent holmium laser prostatectomy or holmium laser enucleation of the prostate for PBNO and BPH at the Seoul National University Hospital between January 2018 and August 2022. Patients in both groups were managed following the same clinical protocol.
This study was approved by the Institutional Review Board (IRB) of Seoul National University Hospital (IRB No. 0810-027-260, IRB No. 2407-103-1553).

The inclusion criteria for the PBNO group were as follows: patients aged ≥ 50 y who presented with moderate to severe LUTS and visited the urology outpatient clinic; patients with typical cystourethroscopic findings for PBNO; and patients with a total prostate volume < 40 mL assessed by transrectal ultrasound (TRUS) imaging. The sagittal view on TRUS showed bladder neck elevation in most patients with PBNO (Fig. B). We defined typical cystourethroscopic findings for PBNO as follows: a high bladder neck when viewed horizontally from the verumontanum (Fig. 1A); annular narrowing of the bladder neck opening (Figs. A and ); isolated median lobe hypertrophy (median bar) was excluded. These findings were obtained using 30° rigid cystourethroscopy (Karl Storz Hopkins) in the lithotomy position. The inclusion criterion for the BPH group was patients aged ≥ 50 y with a clinical diagnosis of BPH. The exclusion criteria for the PBNO and BPH groups were the presence of genitourinary cancer, history of surgery, urethral stricture, urinary tract infection (UTI), interstitial cystitis, and neurogenic bladder dysfunction. Patients with minimal neuropathy, which was determined to have a negligible or minimal impact on LUTS by medical history and physical examination, were included in this study.
The diagnostic workup included a physical examination, including digital rectal examination (DRE), assessment of symptoms using the International Prostate Symptom Score (IPSS) and Overactive Bladder Symptom Score (OABSS), urinalysis and urine culture to exclude UTI, TRUS to measure prostate volume, uroflowmetry with ultrasound measurement of post-void residual urine volume, and a prostate-specific antigen (PSA) test. In cases where nodules were palpable on DRE or elevated PSA levels clinically indicated suspicion of prostate cancer, a TRUS-guided prostate biopsy was performed.
Surgery was then performed at a later date after the pathological report confirmed a negative result for prostate cancer. A urodynamic study (UDS) was performed on all patients. The bladder contractility index (BCI) was defined as BCI = PdetQmax + 5·Qmax. We defined patients with a BCI < 100 as having detrusor underactivity (DUA) and those with a BCI ≥ 100 as having non-DUA. The surgical outcomes included operative time, enucleation time, morcellation time, and extracted prostate volume. Perioperative outcomes included the duration of Foley catheterization, length of hospital stay after surgery, and surgical pathology. The IPSS, OABSS, and uroflowmetry data were measured at 2 weeks, 3 mo, and 6 mo postoperatively. Patient-reported subjective satisfaction with the surgical outcomes was assessed 6 mo postoperatively. Postoperative complications were evaluated at 2 weeks, 3 mo, and 6 mo using the Clavien-Dindo classification. The patient was placed in the lithotomy position under spinal or general anesthesia. The Ho:YAG laser (VersaPulse PowerSuite 100 W, Lumenis Pulse™ 120 H, Yokneam, Israel) was set to 80 W (2 J, 40 Hz). The three-lobe technique was primarily used when a clear surgical plane for enucleation was identified. The median lobe was enucleated first. Initial incisions were made on both sides of the verumontanum to identify the capsular plane; the surgical plane of the capsule is characterized by circular fibers running in the transverse direction. Longitudinal incisions were made at the 5 and 7 o’clock positions of the bladder neck, connecting with the previous incisions, and a transverse incision was made immediately above the verumontanum to enucleate the median lobe. Resection of both lateral lobes was performed in cases in which the surgical plane in the lateral lobes was not identifiable during enucleation.
Resection of each lateral lobe was then started distally at the verumontanum, and the lower limit was defined with incisions on both sides of the initial incision at the verumontanum. A prostatic mucosal incision was made at the 1 and 11 o’clock positions over the entire length of the prostate to define the margin of the lateral lobe resection. The lobe was then released, starting distally, until only a 12 o’clock mucosal bridge remained at the bladder neck. After meticulous bleeding control in the prostatic fossa, morcellation was performed using a 26-Fr nephroscope and a tissue morcellator (Versacut™, Lumenis). A 22-Fr 3-way Foley catheter was placed under continuous irrigation and removed on the first postoperative day, and patients were typically discharged on the first day after surgery. Statistical analysis All variables were expressed as mean ± standard deviation. Propensity matching was performed when a significant difference was observed in the sample size between the BPH and PBNO groups. For the comparison of clinical parameters between the two groups, paired t-tests were used for continuous variables and chi-square tests for categorical variables. Within each group, changes in postoperative functional outcomes were compared using paired t-tests and chi-square tests for continuous and categorical variables, respectively. Statistical significance was set at p < 0.05.
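For readers who wish to reproduce the urodynamic classification used above, the BCI formula (BCI = PdetQmax + 5·Qmax) and the DUA cutoff (BCI < 100) can be expressed as a short calculation. This is an illustrative sketch only; the function names and example values are ours, not taken from the study data:

```python
def bladder_contractility_index(pdet_qmax: float, qmax: float) -> float:
    """BCI = PdetQmax + 5 * Qmax.

    pdet_qmax: detrusor pressure at maximum flow (cmH2O)
    qmax: maximum urinary flow rate (mL/s)
    """
    return pdet_qmax + 5 * qmax


def has_detrusor_underactivity(pdet_qmax: float, qmax: float) -> bool:
    """DUA is defined as BCI < 100; a BCI >= 100 is non-DUA."""
    return bladder_contractility_index(pdet_qmax, qmax) < 100


# Hypothetical example: PdetQmax = 40 cmH2O, Qmax = 8 mL/s
# BCI = 40 + 5 * 8 = 80, below the cutoff of 100 -> classified as DUA
print(bladder_contractility_index(40, 8))   # 80
print(has_detrusor_underactivity(40, 8))    # True
```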
Patient demographics and operative and perioperative outcomes Twenty-eight patients with PBNO and 447 with BPH were identified (Table ) (Fig. ). The mean age of the PBNO group was 67.9 (± 6.5) y, and the mean total prostate volume was 32.0 (± 8.8) mL. No significant differences were observed in the baseline total IPSS and OABSS between the PBNO and BPH groups (p = 0.47 and p = 0.38, respectively). On preoperative UDS, detrusor underactivity was significantly more prevalent in the PBNO group (78.6%) than in the BPH group (57.5%) (p < 0.01). The total operation time was shorter in the PBNO group [26.7 (± 9.5) min] than in the BPH group [61.4 (± 32.0) min] (p < 0.01). The Bladder Outlet Obstruction Index in the BPH and PBNO groups was 38.4 (± 15.9) and 30.7 (± 15.9), respectively, showing no significant difference (p = 0.16). The postoperative functional outcomes and results of the three self-administered questionnaires for the PBNO and BPH groups are presented in Table and Fig. .
The total IPSS significantly improved at 2 weeks postoperatively compared to the preoperative values in both the PBNO and BPH groups (p < 0.01), whereas no significant differences were observed in the OABSS between the preoperative and 2-week postoperative assessments (p = 0.27 and p = 0.32, respectively). Both the PBNO and BPH groups showed significant improvements in total IPSS, OABSS, and Qmax at 3 and 6 mo postoperatively compared with preoperative values (p < 0.01). However, the PBNO group exhibited less improvement in the IPSS voiding score and maximum flow rate (Qmax) at 3 and 6 mo postoperatively than the BPH group (p < 0.01). Additionally, at 6 mo postoperatively, the total IPSS was higher in the PBNO group [10.5 (± 7.9)] than in the BPH group [6.0 (± 4.7)], although this did not reach statistical significance (p = 0.07). No significant differences were observed in the OABSS at 6 mo postoperatively between the PBNO group [3.4 (± 2.3)] and the BPH group [3.3 (± 2.5)] (p = 0.81). At 6 mo postoperatively, the proportion of patients who responded positively to the satisfaction with treatment question (STQ) was lower in the PBNO group than in the BPH group, although this difference was not statistically significant (STQ: 60.7% vs. 92.2%, p = 0.087). Similarly, a higher proportion of patients in the BPH group responded positively to the overall response assessment (ORA) and the willingness to undergo surgery question (WSQ) than in the PBNO group, without statistically significant differences (ORA: 82.1% vs. 94.0%, p = 0.566; WSQ: 53.6% vs. 88.4%, p = 0.093). The PBNO group had one case of recatheterization at 2 weeks postoperatively (n = 1, 3.5%) (Table ). None of the patients required a blood transfusion or transurethral coagulation. During the follow-up period of up to 6 mo postoperatively, there were no complications of bladder neck contracture or urethral stricture in either group.
PBNO is characterized by a bladder neck that does not open sufficiently during urination, leading to obstructed urinary flow without any anatomical obstruction, such as benign prostatic enlargement or urethral stenosis. There is no universally accepted definition or diagnostic standard for PBNO. Traditionally, diagnosis is achieved by coupling the outcomes of UDS with radiographic visualization of the bladder neck area (Fig. C). The urodynamic picture is characterized by outlet obstruction accompanied by a reduction in Qmax (10–15 mL/s; normal value > 18 mL/s), high-pressure detrusor contractility, and increased intravesical pressure. Nitti et al. categorized PBNO into the following three distinct types: (1) classic high-pressure, low-flow voiding; (2) normal-pressure, low-flow voiding with narrowing of the bladder neck; and (3) delayed opening of the bladder neck. These three classifications indicate vesical neck dysfunction resulting in functional obstruction. Other previous studies have characterized the urodynamic outcomes of PBNO; however, no universally accepted video-UDS definition of PBNO exists. Furthermore, video-UDS is limited by its low availability, high cost, and radiation exposure. Girolamo et al. attempted to diagnose PBNO using MR voiding cystourethrography to overcome the limitations of video-UDS. MR voiding cystourethrography offers advantages such as reduced radiation exposure and elimination of the need for urethral cannulation maneuvers. However, it is unfamiliar to urologists and requires a specialist. In contrast, cystourethroscopy has advantages over video-UDS regarding accessibility and cost, and it is more familiar to urologists. In this study, PBNO was diagnosed when the bladder neck was not visible when viewed horizontally from the verumontanum on rigid cystourethroscopy (Fig. A) and the bladder neck opening showed annular narrowing (Figs. A and ).
Using the criteria for diagnosing PBNO with cystourethroscopy outlined in this study could help urologists diagnose PBNO in their clinical practice. To our knowledge, only three studies have reported transurethral incision (TUI) for PBNO. Kochakarn et al. performed a unilateral TUI of the bladder neck in patients with PBNO (n = 35); follow-up was conducted for up to 1 y postoperatively, and their retrospective analysis showed that the IPSS and Qmax significantly improved. Yang et al. conducted a prospective study (n = 33) of TUI of the bladder neck with preservation of the supramontanal tissue; IPSS and Qmax significantly improved 2 y postoperatively. Mattioli et al. performed a retrospective analysis (n = 196) of TUI using a thulium laser, with significant improvement in IPSS and Qmax 1 y postoperatively. In the present study, patients were followed up for 6 mo postoperatively; the total IPSS and Qmax improved compared with preoperative values and were similar to the postoperative outcomes of previous studies. We performed holmium laser prostatectomy on patients with PBNO; to our knowledge, this is the first study using this surgical method. Holmium laser prostatectomy was performed instead of TUI because of the possibility of recurrence. In our previous surgical experience of performing TUI of the bladder neck in patients diagnosed with PBNO, the functional outcome improved postoperatively; however, symptoms of obstruction recurred during long-term follow-up. On cystourethroscopy, the bladder neck remained in the form of an isolated median lobe because of the previous incision. Accordingly, a secondary prostatectomy was performed to remove the enlarged median lobe, with no recurrence of symptoms thereafter. Based on our experience with these cases, we have performed holmium laser prostatectomy rather than TUI of the bladder neck in patients with PBNO since 2018.
No previous studies have compared the results of surgical treatment for PBNO with those for BPH. In this study, the PBNO group showed less improvement in the IPSS voiding score [5.5 (± 4.8) vs. 2.0 (± 3.4)] and Qmax [14.8 (± 5.6) mL/s vs. 23.6 (± 5.6) mL/s] at 6 mo after surgery than the BPH group. In the subjective satisfaction survey 6 mo postoperatively, the PBNO group reported lower satisfaction than the BPH group, although the difference was not statistically significant. We attribute the difference in objective and subjective postoperative outcomes between the two groups to the higher proportion of DUA on preoperative UDS in the PBNO group (78.6%) than in the BPH group (57.5%). The underlying mechanisms responsible for the increased prevalence of preoperative DUA in the PBNO group remain unclear, necessitating further investigation. The strengths of this study are as follows: First, unlike previous studies, we compared and analyzed the postoperative outcomes in the PBNO and BPH groups. Second, the two patient groups were registry-based prospective cohorts that included patients who underwent diagnosis and treatment according to the same clinical protocol. This study had some limitations. First, follow-up extended only to 6 mo postoperatively, so long-term outcomes could not be assessed. Second, the number of cases in the PBNO group (n = 28) was relatively small compared with that in the BPH group. Third, preoperative and postoperative sexual function was not assessed, which requires investigation in future studies. In conclusion, holmium laser prostatectomy was effective and safe for patients with PBNO, with good subjective patient satisfaction. Supplementary Material 1 is available as electronic supplementary material.
Substantia nigra iron deposition in Lewy body disease: an MRI quantitative susceptibility mapping and neuropathology study | c2151b51-3b77-4014-a941-a37b7ba723f9 | 11716564 | Forensic Medicine[mh] | |
Enough with simplifying: “eat less and move more”: at what point are we with the treatment of excess weight in paediatrics? | e08b7e1a-b845-4a90-a562-d94aae879b0b | 11119387 | Pediatrics[mh] | What has changed since the 2020 commentary? Four years have passed since our 2020 commentary on weight stigma, but almost nothing has changed. Despite declarations of health urgency, the prevalence of obesity continues to increase, stealing years of life and health from a large part of the world population. We have not yet given due weight to its social determinants and adverse childhood experiences, nor have we adopted in daily practice a respectful approach according to the principles of the motivational interview, as required by the latest US guidelines. We have not yet adopted person-first language, nor changed stigmatizing images in training. We have not moved the assessment of the child with excess weight from the mere Body Mass Index (BMI) z-score to an evaluation of general health using the Edmonton Obesity Staging System for Pediatrics. We have not changed the goal of care from weight loss to improved health. So we are preparing to deal with obesity intensively, as requested by the WHO (World Health Organization) European Region 2022 document, without the right tools and with the risk of doing more harm than good. Today it is quite fashionable to talk about excess weight in families, schools and healthcare, but few actually do anything to improve the situation (Table ). Already in 2020 we had underlined how important it is to fight weight stigma in healthcare, but also in the other educational institutions, like schools and families. Unfortunately, actions in this field are still lacking. We talk about it more and more, but it is clear that talking about it not only doesn’t work, but can do much more harm. Literature on stigma has grown enormously, finally considering children and families.
It has been highlighted that derision affects children and parents (stigma by association) from pregnancy onwards, negatively impacting health throughout their lives and creating an obstacle to treatment. If we truly wish to deal with obesity, as international organizations require, there are many things to do, such as involving political institutions, which currently do not collaborate with each other. Alongside regulating marketing strategies aimed at children and adolescents and reducing sugar and sweeteners, based on the principle “everyone must do their part!”, as healthcare workers we must ask ourselves where to start. We propose “Training, Networking and Contrasting Weight Stigma”, magic words which must become operational. Training The first move is always “training”, a journey that has already begun. But training must not only concern epidemiology, diagnosis, therapy and complications which, in the absence of early behavioural treatment, are perceived by doctors and patients as faults, diagnosed too late and therefore treated with the usual methods. Training in Italy is mainly entrusted to university structures, which offer it to students, specialists and healthcare professionals. The change of perspective regarding obesity, with the involvement of patients in the development of therapeutic paths, has not, however, yet been truly adopted by anyone. Only when trainers share the objective of fighting weight stigma and managing obesity as a “chronic disease” will we be able to carry out a “new” form of training! Furthermore, according to andragogy, the science of adult education, training must be enjoyable for learners. Yet family and hospital paediatricians and paediatric nurses are absolutely convinced that they are well prepared to treat children with obesity: there is nothing new to know, apart from drugs to be used with caution in the developmental age, the usual recommendations, which families already know, and the diagnostics to be prescribed.
They believe that nothing can be done to treat obesity, which is considered just “a waste of time”. Unfortunately, attention to professional and family stigma creates feelings of guilt in professionals, especially the best ones, who defend themselves by denying the problem and shifting attention to other, less problematic illnesses. In this way, training towards a change of perspective is still lacking. Training courses on the treatment of obesity are carried out with old clichés: not new ways of communicating with the patient, better professional-family-child/young person relationships, or awareness-raising to contrast weight stigma, but rather a concentration on breastfeeding, weaning and complications. These are the usual topics that have already proven unsuccessful, while we know that reducing stigma and guilt, although difficult, would improve quality of life, even if not always weight, especially in the long term. Fighting stigma Obesity, like all chronic diseases, is not healable. Reversing it, when it has become structured and, above all, severe, is almost impossible.
The stigmatization of weight has made healthcare professionals unwittingly blaming and deriding, and patients, repeatedly offended, have become hypersensitive. To change care we must accept patients as equal co-authors in care projects, as already done in other countries for diseases, such as diabetes mellitus and obesity. Without this step, dealing more with obesity could have negative effects. Teaching primary school overweight children, the “mistakes” that occur daily in their families and the complications of excess weight can increase the sense of guilt. Focusing attention on food, weight and body image, favours low self-esteem and eating disorders, which are already frequent . The latest study by Rebecca Puhl’s group , who has been dealing with weight stigma for years, on thousands of questionnaires filled out by parents, teenagers and now also children from various countries around the world, has highlighted the role of stigma on both physical and psychological health, and its devastating effect, if started in childhood. We have discovered that in all families, where members meet mainly for shopping, preparing and eating meals, trying on and buying clothing, the discussion about weight is continuous, especially between parents and overweight children and it usually has negative and harmful tones. In order to improve this situation, a guide will soon be published on the website of the Italian Society of Paediatrics to help parents talk to their children and support them during treatment. Furthermore, the Emilia Romagna Region has developed the BeBa (Benessere Bambino, Childhood Wellbeing) app for families to promote healthy childhood initiatives . Today, paediatricians are asked to create guides to help parents give positive messages about weight and body to their children, or at least to avoid talking about it to protect their psychological health and allow the construction of a positive social identity. 
These guides are necessary for understanding what to do, instead of just talking about it without doing anything, since talking promotes shame and sadness. But how can paediatricians help parents if they themselves use negative messages? Instead of listening to them and helping them find personalized treatment paths, they load them with guilt and judgement. The list of tips for changing stigma in the healthcare sector is extensive, but still struggles to be adopted (Table ) . Guides for parents on how to talk about weight and what to do to reduce stigma and improve family lifestyle are necessary. Luckily not everything is stalled. Our project to collect questionnaires from parents of children and adolescents with overweight/obesity, started with a group of pediatricians from the Society of Paediatric Endocrinology and Diabetology, and welcomed by 17 Level II centers and 10 Family Paediatricians from the Campania Region, has terminated. Parents and children rated on a 5 Likert scale how motivating and offensive they considered 10 terms commonly used by professionals to talk about weight: weight, excessive weight, unhealthy weight, overweight, obesity, severe obesity, serious obesity, complicated obesity, fat, very fat . Questionnaires of 391 parents and 249 children (range 5–18 years; average age 11), filled out from June 2019 to February 2020, demonstrate that the term “unhealthy weight” is the most motivating and least offensive. The most offensive terms are: obesity and fat. There is no single term that is valid for everyone, so the only practical advice for healthcare professionals is to use the terminology used by the family or ask them. Terms like a bove normal weight, inadequate weight, very robust, unhealthy weight , were the most motivating. Network For years, some family pediatricians and specialists have tried to treat individual patients, but without a proper network, satisfactory and long-lasting results can rarely be achieved with such a disease. 
The network is essential! Today there is a swarm of small prevention projects that are not coordinated with each other, often sponsored by the food or marketing industry, carried out with small groups and limited timescales, which in the end will not answer the questions that experts are asking today about how to contrast obesity. Currently, in 11 regions led by Emilia Romagna, a project is underway, supported by the National Center for Disease Prevention and Control (CCM) 2020 of the Ministry of Health, which re-evaluates the positive contents of past projects and seeks to reactivate them, improve them, put them online and spread them to other institutions among primary care, hygiene and nutrition services, and specialists. A multi-disciplinary, multi-component, family-centered therapeutic project, based on the principles of therapeutic education as supported by all the most recent guidelines, has been active in Emilia Romagna for ten years. Family paediatricians, who are the strong point not only in the prevention but above all in the treatment of obesity, adequately trained in their central role (Table ), are enthusiastic. However, it is necessary to seriously invest in the care of this disease, in order to improve the path and offer intensive projects with teams trained on social determinants and early adverse events, as proposed by the AAP, without forgetting the transition to adult care.
Family paediatricians, who are the cornerstone not only of prevention but above all of the treatment of obesity, are enthusiastic when adequately trained in their central role (Table ). However, it is necessary to seriously invest in the care of this disease, in order to improve the care pathway and offer intensive projects with teams trained on social determinants and early adverse events, as proposed by the AAP, without forgetting the transition to adult care. It will take time to see change everywhere, but the most difficult first step, which is changing the narrative, does not require huge economic investments, but rather personal growth paths for professionals, with adequate training in the use of terms in healthcare to send messages of “RESPECT”. Thus, we can begin to improve the therapeutic approach and the outcome of treatment, reduce the internalization of weight stigma and the prevalence of drop-out, and change the future history of children with excess weight. The next step will be to spread the process everywhere, exploiting all possible resources in the network. In short, there still is hope!
Artificial intelligence for response prediction and personalisation in radiation oncology | be67a268-d347-441b-a9e4-9a6975b3ac43 | 11839704 | Internal Medicine[mh] | Radiotherapy is one of the pillars of cancer treatment , using ionising particles or x‑rays to arrest development of the treated tumour, i.e. to establish tumour control. Tumour control tends to improve with increasing radiation dose, although clinical evidence paints a more complex picture . At the same time, radiation also negatively affects healthy tissues. These radiotherapy side effects range from minor to severe transient or chronic disabilities that negatively impact patient wellbeing. Therefore, radiotherapy is often a compromise between achieving good tumour control while limiting severe side effects. Currently, such considerations are based on observations in populations of treated patients with similar diagnoses, and result in a treatment plan with a standard radiation dose. However, both tumour control and healthy tissue response are known to be patient specific. Therefore, there is an ongoing effort to personalise radiotherapy . Sometimes single characteristics or biomarkers strongly affect tumour control or the healthy tissue response. For example, oropharyngeal cancers caused by human papillomavirus type 16 are more radiosensitive and easier to treat than oropharyngeal cancers of different origins . In other cases, there may not be a single biomarker or small set of biomarkers that determines or correlates with tumour control or the healthy tissue response. Instead, these are driven by a complex variety of characteristics. Artificial intelligence (AI) systems can aid in providing clinical insights based on complex sets of characteristics. In this article, we discuss the use of AI systems to predict radiotherapy response and discuss ongoing research and concepts regarding how such systems might be used for personalised radiotherapy. 
Prior to treatment, AI systems can personalise therapy by predicting the response of the tumour and healthy tissues to radiotherapy. In essence, such systems extend existing tumour control probability (TCP) and normal tissue complication probability (NTCP) models. TCP models currently inform the prescribed standard fractionation and radiation dose, whereas NTCP models inform dose constraints for organs at risk. Within this context, AI systems seek to overcome limitations in personalisation due to how current models are formulated and answer one or more of the following questions: What dose should be applied to the tumour to achieve a high probability of tumour control? What dose may be present in organs at risk that would still lead to a low probability of complications? What additional intervention or treatment modality may improve tumour control, and how should treatment be delivered? The NTCP-related question is the easiest to answer. AI systems integrate the planned dose in an organ at risk and additional patient-specific characteristics to estimate the probability of adverse radiation effects in healthy tissue. This enables estimation of the patient-specific NTCP. An example use case is the model-based approach for selecting patients to be treated with proton therapy based on the expected reduction in the likelihood of treatment side effects through the use of proton therapy. The involved NTCP models rely on a selection of dosimetric and demographic characteristics as well as accurate recording of clinician- and patient-reported treatment-related toxicities. In recent years, studies have investigated the use of additional data sources such as magnetic resonance imaging of organs at risk. Perhaps surprisingly, the TCP-related questions are harder to answer in research settings using patient data.
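Both model families have simple classical closed forms that the AI systems discussed here generalise. As a point of reference, the following is a minimal sketch of a Poisson TCP model with linear-quadratic cell survival and a Lyman-Kutcher-Burman-style NTCP model; all parameter values are illustrative, not clinical.

```python
import math

def lq_tcp(dose_per_fraction_gy, n_fractions,
           alpha=0.3, beta=0.03, n_clonogens=1e7):
    """Poisson TCP with linear-quadratic (LQ) cell survival:
    per-fraction surviving fraction SF = exp(-alpha*d - beta*d^2),
    TCP = exp(-N0 * SF**n). Parameter values are illustrative only."""
    sf = math.exp(-alpha * dose_per_fraction_gy
                  - beta * dose_per_fraction_gy ** 2)
    return math.exp(-n_clonogens * sf ** n_fractions)

def lkb_ntcp(mean_dose_gy, td50_gy=30.0, m=0.2):
    """Lyman-Kutcher-Burman-style NTCP for a parallel organ (volume
    parameter n = 1, so the generalised equivalent uniform dose
    reduces to the mean dose): NTCP = Phi((D - TD50) / (m * TD50)),
    with Phi the standard normal CDF."""
    t = (mean_dose_gy - td50_gy) / (m * td50_gy)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Tumour control rises steeply with total dose (2 Gy fractions) ...
for fx in (25, 30, 35):
    print(f"{2 * fx} Gy: TCP = {lq_tcp(2.0, fx):.3f}")
# ... while complication risk depends on the organ-at-risk dose.
print(f"NTCP at 25 Gy mean dose: {lkb_ntcp(25.0):.3f}")
```

AI systems effectively replace the handful of population-fitted parameters in such models with predictions conditioned on rich patient-specific data, but the questions being answered are the same.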
Patients with a specific diagnosis receive a standard dose prescription that varies little, making it difficult to directly infer the role of dose in tumour control . Research therefore focuses mostly on development and assessment of AI systems that estimate the risk of locoregional tumour recurrence, disease progression, distant metastatic spread and death, and other, similar endpoints . Here, the hypothesis is that patients at a higher risk of adverse tumour-related outcomes may benefit from a change in or addition to the standard treatment, e.g. an escalated dose prescription or the use of specific drugs. Conversely, patients with a very low risk of adverse tumour-related outcomes but with likely treatment side effects may be eligible for a reduction in dose. Despite a large research interest, at the time of writing, very few randomised clinical trials are investigating the interventional use of AI systems for personalising radiotherapy . Pretreatment personalisation results in static treatment plans that are easy to integrate into current clinical workflows (Fig. ). However, many radiotherapy treatment regimens are fractionated. The tumour will change over the course of the treatment and thereafter. Conventional static plans use a series of safety margins to ensure the target receives its prescribed dose . However, the use of safety margins to ensure tumour coverage means healthy tissues are more likely to receive higher doses, thus increasing the risk of treatment toxicity. To balance the desire to minimise target margins with the need to treat the whole tumour, technological advances aim to precisely deliver radiation dose to the intended target and are increasingly taking anatomical changes during treatment into account in a process called adaptive radiotherapy . Such changes are typically observed by imaging. AI systems are expected to play an important role in this process, e.g. 
by enhancing low-dose imaging for detecting anatomical changes , through automated dose estimation and by measuring dose delivery among others. Adaptive radiotherapy makes treatment more precise, which may reduce normal tissue toxicities. Altering the actual radiation dose based on observed tumour and normal tissue responses is also starting to be explored. Conceptually, a differentiated tumour response is expected to be observable during fractionated radiation treatment because of differences in tumour radiosensitivity. Indeed, studies point to improved differentiation of treatment response based on changes observed in imaging obtained during treatment . Data that capture tumour response can then be used to support treatment decisions, e.g. early termination or a dose decrease for patients with tumours that respond well to treatment , thus paving the way for response-driven radiotherapy. In response-driven radiotherapy, AI systems help interpret complex longitudinal data such as imaging and provide treatment decision support. The logical conclusion of treatment personalisation is radiotherapy that is dynamically optimised throughout treatment. Dynamically optimised radiotherapy integrates both pretreatment data, data obtained during treatment and biophysical simulations, to create treatment plans that maximise TCP and minimise NTCP. Such plans, which are continuously adapted and reassessed during treatment, can become substantially different from the current single- or few-dose-level treatment plans. Dynamically optimised radiotherapy may introduce evolutionary principles in radiotherapy with the aim of reducing radiotherapy resistance . Due to the complexity of the procedure, dynamically optimised radiotherapy will likely be driven by AI systems. Patients who have undergone radiotherapy are regularly checked to assess treatment success, determine any adverse treatment effects, and to monitor tumour recurrence and progression. 
The need for additional treatment is also determined. AI systems may assist in all these processes and could potentially be used to provide early indications of progression, recurrence or metastasis. For example, the Response Evaluation Criteria in Solid Tumors (RECIST) is a widely used system for assessing treatment response during follow-up of patients with solid tumours. RECIST defines response categories of the treated lesion: complete response, partial response, progressive disease and stable disease. Determining treatment response using RECIST requires measuring lesion sizes, e.g. from imaging. Multiple studies have investigated using AI systems for RECIST scoring to save time and reduce interobserver variability. Pseudoprogression of lesions is sometimes observed after radiotherapy and immunotherapy. Pseudoprogression indicates an increase in lesion size that is transient and disappears during follow-up but would be considered progressive disease at the moment of measurement. Pseudoprogression that is interpreted as true progression may result in unnecessary treatment changes. AI systems may help to differentiate between pseudoprogression and true progression.

AI systems require data. During development, data are used to train an AI system, i.e. it learns patterns in the data that are related to tumour control or the response of healthy tissues. A trained AI system can subsequently make predictions based on data from new patients. A large variety of data is potentially available for treatment personalisation (Fig. ). However, acquisition of any data has a cost, both in terms of personnel, equipment and consumables, as well as in terms of patient comfort and wellbeing. Realistically, the patient data available to AI systems are data that are obtained in clinical routine within a relevant timeframe.
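The RECIST target-lesion categories described above reduce to a few size thresholds. A deliberately simplified sketch follows; it omits non-target lesions, new-lesion rules and lymph-node specifics, so it is not a complete RECIST 1.1 implementation.

```python
def recist_target_response(baseline_sld_mm: float,
                           nadir_sld_mm: float,
                           current_sld_mm: float) -> str:
    """Classify target-lesion response from the sum of longest
    diameters (SLD). CR: all target lesions gone; PD: >=20% increase
    over the nadir AND >=5 mm absolute increase; PR: >=30% decrease
    from baseline; SD: none of the above."""
    if current_sld_mm == 0:
        return "CR"
    if (current_sld_mm >= 1.2 * nadir_sld_mm
            and current_sld_mm - nadir_sld_mm >= 5):
        return "PD"
    if current_sld_mm <= 0.7 * baseline_sld_mm:
        return "PR"
    return "SD"

# Regrowth from the nadir that is still >=30% below baseline -> "PR"
print(recist_target_response(baseline_sld_mm=100,
                             nadir_sld_mm=60,
                             current_sld_mm=65))
```

In practice, the time-consuming and observer-dependent step is measuring the lesions on imaging, which is what the AI systems investigated in the cited studies target; the final arithmetic above is trivial.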
Biomarkers, for example, may be determined from a tumour biopsy for diagnostic purposes but are not well suited for response-driven radiotherapy due to both lab time and the need for repeated biopsies. When a relatively complete picture is available, multimodal patient data could, in the future, inform a digital twin to simulate treatment response through both statistical learning—AI techniques—and physics-driven modelling. Some data are commonly available and have been investigated for treatment personalisation. Clinical (e.g. tumour staging) and demographic data (e.g. age and relevant lifestyle choices) are almost always available within the constraints of patient privacy and data protection laws. For various cancer entities, biopsies and other tissue samples provide biomarkers for differential diagnosis. Likewise, diagnostic and treatment planning imaging are readily available. The radiotherapy treatment plan itself is also a data source, describing the expected dose delivered to the tumour and healthy tissues. In contrast to pretreatment data, intra-treatment data sources are considerably more limited, if available at all. Here, AI systems must mostly rely on the already delivered dose and imaging, such as low-dose computed tomography, cone-beam computed tomography or magnetic resonance imaging. Some institutions may collect clinician- or patient-reported outcome measures during treatment, but other data are not routinely obtained. Patient demographics and other patient-level data do not meaningfully change over the timescale of radiotherapy treatment. Biopsy-derived data are not available during treatment due to their invasiveness. Thus, aside from imaging and clinician- or patient-reported outcome measures, the only repeatable and acceptably invasive data derive from blood samples. The path to routine use of AI systems for personalising radiotherapy is not trivial. Below we discuss some of these challenges.
Challenges for realising research concepts

The main challenge in realisation of AI systems for personalising radiotherapy from a research perspective is related to data. Most research currently focuses on pretreatment personalisation of radiotherapy, simply because data are already available. Here, notable data sources are diagnostic and treatment planning imaging and clinical and demographic parameters. For more advanced concepts such as response-driven radiotherapy and dynamically optimised radiotherapy, data are more scarce or absent altogether. This makes assessing and realising these concepts difficult. Research into the response-driven radiotherapy concept is expected to benefit from an increased availability of imaging data during treatment due to an uptake in adaptive radiotherapy in clinical settings. An open question is whether the mostly anatomical information from computed tomography or conventional MRI provides sufficient information to realise this concept or whether functional information (e.g. on tumour oxygenation or immune involvement) is required. One important aspect of data is heterogeneity. Sources of heterogeneity are, for example, variations in imaging equipment and protocols, variation in data annotation (e.g. tumour segmentation), variation in tissue preparation protocols, variation in data processing, variation in radiotoxicity assessment, etc. Heterogeneity not only applies to input data but also to endpoints. Clinician- or patient-reported outcome measures used as a reference standard for NTCP models are, to a degree, subjective. Likewise, TCP-related events (e.g. tumour recurrence, metastasis, etc.) are dated to the follow-up date at which they are observed but were also present—but unobserved—prior to that date. This results in some uncertainty regarding the exact timing of these events.
Data sources that are too heterogeneous hamper the detection of meaningful and generalisable patterns by AI systems, basically drowning any signal in noise. Standardisation and harmonisation efforts seek to limit this heterogeneity . However, some degree of heterogeneity cannot be realistically avoided. AI systems need to generalise to new, unseen data to be clinically translatable. This means that AI systems should be trained with data that contains, or mimics, the heterogeneity encountered in clinical practice. The most straightforward way is by training an AI system on data collected from multiple centres, e.g. through federated learning or local, regional, national and international data repositories . Alternatively, or additionally, if heterogeneity can be characterised, new data can be synthesised or existing data augmented to mimic a heterogeneous dataset . Another issue is that data have an age, i.e. each patient dataset is a snapshot of the clinical practice at the moment it is acquired. Over the years, patient outcomes tend to improve through the availability of new treatment modalities (e.g. consolidation durvalumab in stage III non-small cell lung cancer ) and technological improvements . Likewise, data quality and availability tend to improve over time, for example through technological improvements in equipment (e.g. time-of-flight positron-emission tomography or long-read sequencing ) and cheaper availability. Data age is especially relevant in the context of TCP-related outcomes such as locoregional control and in late chronic toxicity-related outcomes that may require several years of patient follow-up to detect and collect. By the time a patient dataset becomes available for training AI systems, the dataset is several years old. This may cause a drift in performance of AI systems , where the performance of an AI system in new data is lower than that in older data that were used to train it. 
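Of the multi-centre strategies mentioned above, federated learning is perhaps the least intuitive. The sketch below illustrates the core federated-averaging idea with a toy 1-D linear model and synthetic "centre" datasets; all names and parameters are illustrative. The key property is that only model weights cross centre boundaries, never patient data.

```python
import random

def local_update(weights, data, lr=0.1, epochs=5):
    """One centre's local step: per-sample gradient descent for a
    1-D linear model y = w*x + b on that centre's private data."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def fed_avg(centre_datasets, rounds=20):
    """Federated averaging: each round, every centre trains locally
    and the server averages the returned weights. Raw patient data
    never leave the centres; only (w, b) does."""
    w, b = 0.0, 0.0
    for _ in range(rounds):
        local = [local_update((w, b), d) for d in centre_datasets]
        w = sum(lw for lw, _ in local) / len(local)
        b = sum(lb for _, lb in local) / len(local)
    return w, b

# Three synthetic "centres" sampling the same relation y = 2x + 1
random.seed(0)
centres = [[(x, 2 * x + 1 + random.gauss(0, 0.01))
            for x in (random.uniform(0, 1) for _ in range(20))]
           for _ in range(3)]
w, b = fed_avg(centres)
print(f"recovered model: y = {w:.2f}x + {b:.2f}")
```

Real federated systems add secure aggregation and must cope with centres whose data distributions differ, but the data-locality principle is the same.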
Data age cannot be prevented, but its effect can be reduced by collecting from multiple centres, as mentioned earlier. This not so much shortens the time to availability of data but rather decreases the timeframe required for collecting sufficient data to train an AI system as compared to collection by a single centre, thus allowing for the use of more recent data.

Challenges to clinical translation

Although the usefulness of treatment personalisation based on AI systems is clear, their clinical translation into the radiotherapy department has not been realised. Since models for treatment personalisation directly intervene in treatments, a high level of evidence for their clinical efficacy is required. Such evidence comes, e.g. from interventional randomised clinical trials. Since such trials are costly, an AI system must offer a clear expected benefit. Therefore, as a first step, proposed AI systems need to be externally validated. This process is not trivial. Validation requires that a dataset be available that is comparable to the training dataset. It requires that the protocol for preparing and processing these data is known and complete. Then it requires that the AI system itself is available or can be reproduced. Finally, it requires that the protocol for processing the output of the AI system is known and available, where this is relevant. Absence of anything from this process prevents external validation. Because of the reproducibility crisis in science, for which the underlying reasons have not been resolved, many proposed AI systems are expected to fail validation. Indeed, this was found in a study in patients with locally advanced rectal cancer, where AI systems based on pretreatment imaging that had been proposed in the literature for prognosticating tumour response were validated. AI systems for personalised radiotherapy response may also be biased.
Biases result in AI systems that perform better in certain populations than in others and can lead to reinforcing existing structures of inequality. These are also risks for AI systems for response prediction and personalisation in radiation oncology. For one, access to radiotherapy is not equal across the globe, which means that data used to train these systems derive predominantly from populations with good access to radiotherapy, with modern equipment and a high quality of care. Even within those populations, access to care and treatment response may be determined by socioeconomic factors. To identify benefits and understand such risks, individual AI systems should undergo an impact assessment.

Challenges to clinical implementation

If an AI system is shown to be clinically meaningful and has received the required certification to operate in a clinical environment, it still needs to be successfully implemented in the clinical workflow. For a radiotherapy department inexperienced in managing AI systems, implementation requires considerable effort. Among other things, the IT infrastructure should be prepared to provide the AI system with its required data and to handle its output. Users should be trained in correct use of the AI system. The AI systems should be monitored and undergo quality assurance. Users should be prepared for occasions when the AI system is temporarily unavailable or no longer supported. Moreover, the use of an AI system may lead to loss of expert skill (deskilling) and overconfidence in the output of the AI system (complacency), which should be addressed. On the other hand, lack of confidence in an AI system may lead to underutilisation and thus diminished benefits. In the end, successful implementation of an AI system for treatment personalisation can only be assessed through lasting improved patient outcomes. To realise such outcomes, a radiotherapy department will need to learn how to successfully operate such systems.
AI systems may help to personalise radiotherapy treatment by prognosticating the tumour and normal tissue response to radiation based on multifaceted and complex patient data.
If challenges in translating and implementing AI systems can be overcome, we expect that the first AI systems for personalising radiotherapy will use pretreatment data to adapt the prescribed dose. These systems will then likely be improved upon by integrating the observed tumour response before full dynamic optimisation of treatment plans becomes a reality. |
Efficacy of Trastuzumab Deruxtecan in HER2-Expressing Solid Tumors by Enrollment HER2 IHC Status: Post Hoc Analysis of DESTINY-PanTumor02 | 768cddad-ba79-4358-bb35-2b4afa7b6663 | 11480158 | Anatomy[mh] | Trastuzumab deruxtecan (T-DXd) is an antibody–drug conjugate comprising a humanized immunoglobulin G1 monoclonal antibody specifically targeting human epidermal growth factor receptor 2 (HER2), a tetrapeptide-based cleavable linker, and a potent topoisomerase I inhibitor payload . T-DXd is approved in multiple countries worldwide for various indications, including HER2-positive and HER2-low breast cancer, HER2-positive gastric or gastroesophageal junction adenocarcinoma, and HER2-mutant non-small cell lung cancer (NSCLC) . In April 2024, based in part on primary results from the DESTINY-PanTumor02 trial, T-DXd was granted accelerated approval in the USA for adult patients with unresectable or metastatic HER2-positive [immunohistochemistry (IHC) 3+] solid tumors that have progressed after prior treatment and have no satisfactory alternative therapy . In the open-label phase 2 DESTINY-PanTumor02 trial, T-DXd demonstrated clinically meaningful antitumor activity in pretreated patients with HER2-expressing solid tumors . Subgroup analyses by HER2 status were previously reported by central HER2 IHC testing, with the greatest benefit reported in patients whose tumors had HER2 IHC 3+ expression . HER2 expression for study enrollment was based on local or central IHC test result and, reflective of HER2 testing methods used in clinical practice , the majority of patients were enrolled based on results from local HER2 IHC testing ( n = 202; 75.7%) . Here, we report a post hoc efficacy analysis of T-DXd in DESTINY-PanTumor02 according to the local or central HER2 IHC test result used for enrollment. 
Study Design and Participants

DESTINY-PanTumor02 (NCT04482309) was an open-label, phase 2 study evaluating T-DXd (5.4 mg/kg once every 3 weeks) for HER2-expressing locally advanced or metastatic disease after ≥ 1 systemic treatment or without alternative treatments. Study design details and outcome measures have been previously published. Briefly, eligible patients were aged ≥ 18 years with histologically confirmed locally advanced, unresectable, or metastatic biliary tract, bladder, cervical, endometrial, ovarian, pancreatic, or other solid cancers (excluding NSCLC, breast, gastric, and colorectal cancers) that had progressed following prior treatment or with no satisfactory alternative treatment options. HER2 expression for enrollment was based on a local IHC test result, where available; otherwise, enrollment was determined via a central IHC test result using the HER2 HercepTest™ (Dako). HER2 IHC scoring was based on current American Society of Clinical Oncology/College of American Pathologists guidelines for scoring HER2 for gastric cancer (in situ hybridization testing not required). Patients who were enrolled based on a local test result also had HER2 expression determined by retrospective central testing using the HER2 HercepTest™ (Dako) and scored according to gastric-specific criteria.

Procedures

T-DXd was administered intravenously once every 3 weeks at 5.4 mg/kg until documented disease progression [Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST 1.1)], withdrawal of consent, or if any other discontinuation criteria were met.

Endpoints

The primary endpoint was confirmed objective response rate (ORR) by investigator assessment; secondary endpoints included safety, duration of response (DOR), progression-free survival (PFS), and overall survival (OS). An independent central review (ICR) per RECIST 1.1 was also conducted to support the investigator-assessed results for secondary outcomes.
Exploratory endpoints included subgroup analysis by HER2 status. Secondary safety endpoints included occurrence of adverse events (AEs), including AEs of special interest [interstitial lung disease (ILD)/pneumonitis and left ventricular dysfunction].

Ethics

All patients provided written informed consent. The study was approved by independent institutional review boards of each participating site and was conducted in accordance with the ethics principles of the Declaration of Helsinki and with Good Clinical Practice guidelines defined by the International Conference on Harmonisation. A list of these individual review boards has been provided as a supplementary appendix.
As reported previously, 268 patients with HER2-expressing solid tumors were enrolled between October 7, 2020, and July 7, 2022; 267 patients (99.6%) received ≥ 1 dose of T-DXd and were included in the full analysis set. In total, 202 (75.7%) and 65 (24.3%) patients were enrolled based on local and central HER2 IHC test results, respectively. Per the local or central HER2 IHC test result used for study enrollment, 111 (41.6%) and 151 (56.6%) patients with IHC 3+ and IHC 2+ tumors were enrolled, respectively; 5 patients with IHC 1+ tumors were included following a protocol-specified interim analysis.
Baseline demographics and clinical characteristics in the full study population, and in patients with IHC 3+ and IHC 2+ tumors according to the local or central HER2 IHC test result used for study enrollment, are summarized in Table . The median (range) duration of follow-up was 16.0 months (0.4–31.6) and 11.7 months (0.7–31.1) in patients with IHC 3+ and IHC 2+ tumors, respectively.

Efficacy

Investigator-assessed ORR and DOR by HER2 IHC status used to determine study enrollment and by tumor cohort are reported in Fig. . In patients with IHC 3+ tumors, investigator-assessed confirmed ORR was 51.4% [95% confidence interval (CI) 41.7, 61.0], and median DOR was 14.2 months (95% CI 10.3, 23.6). In patients with IHC 2+ tumors, investigator-assessed ORR was 26.5% (95% CI 19.6, 34.3), and median DOR was 9.8 months (95% CI 4.5, 12.6). ORR and DOR results by ICR are also presented in Fig. . Investigator-assessed disease control rate at 12 weeks was 78.4% (95% CI 69.6, 85.6) in patients with IHC 3+ tumors and 60.3% (95% CI 52.0, 68.1) in those with IHC 2+ tumors. PFS (by investigator assessment and ICR) and OS by tumor cohort and HER2 IHC status used to determine enrollment are reported in Table and Supplementary Material Fig. . All 5 patients enrolled with HER2 IHC 1+ tumors were in the cervical cancer cohort; 2 were enrolled based on local test results, and 3 were enrolled based on central test results. Two patients (40.0%; 95% CI 5.3, 85.3) had a confirmed partial response (by both investigator assessment and ICR).

Safety

Detailed safety outcomes have been reported previously. Among the 267 treated patients (median follow-up of 12.75 months), 226 patients (84.6%) had ≥ 1 investigator-assessed drug-related AE; the most common drug-related AEs were nausea (55.1%), anemia (27.7%), diarrhea (25.8%), vomiting (24.7%), and fatigue (24.7%).
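Confidence intervals of the form quoted above for small response counts are consistent with an exact (Clopper-Pearson) binomial interval; for example, 2 responders of 5 patients gives roughly the 5.3–85.3% interval reported. The article does not state its CI method, so Clopper-Pearson is an assumption here; a minimal stdlib-only sketch that finds the bounds by bisecting the binomial tail probabilities:

```python
from math import comb

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided binomial CI for k successes in n trials."""
    def tail_ge(p):  # P(X >= k) at success probability p; increases with p
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    def tail_le(p):  # P(X <= k) at success probability p; decreases with p
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))
    def bisect(f, target, increasing=True):
        lo_p, hi_p = 0.0, 1.0
        for _ in range(100):  # bisection to ~2^-100 precision
            mid = (lo_p + hi_p) / 2
            if (f(mid) < target) == increasing:
                lo_p = mid
            else:
                hi_p = mid
        return (lo_p + hi_p) / 2
    lower = 0.0 if k == 0 else bisect(tail_ge, alpha / 2)
    upper = 1.0 if k == n else bisect(tail_le, alpha / 2, increasing=False)
    return lower, upper

lo, hi = clopper_pearson(2, 5)
print(f"ORR {2/5:.1%}, 95% CI ({100*lo:.1f}, {100*hi:.1f})")  # matches the 5.3, 85.3 quoted
```

The same function reproduces the interval for any of the small subgroups in the trial, given the responder count and subgroup size.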
Adjudicated drug-related events of ILD/pneumonitis occurred in 28 patients [10.5%; grade 1, n = 7 (2.6%); grade 2, n = 17 (6.4%); grade 3, n = 1 (0.4%)]; there were 3 (1.1%) fatal adjudicated drug-related cases of ILD/pneumonitis that occurred in the biliary tract, endometrial, and other tumor cohorts. Overall, no new safety signals were reported for T-DXd.
DESTINY-PanTumor02 enrolled patients with HER2-expressing (IHC 3+/2+) solid tumors, as determined by local or central HER2 IHC test results, with 75.7% enrolled based on local HER2 IHC results. In real-world clinical practice, HER2 IHC testing for breast cancer, colorectal cancer, and endometrial cancers is frequently conducted via local laboratories. As such, demonstrating that T-DXd antitumor activity is observed, irrespective of whether HER2 expression is identified by local or central IHC testing, is important to clinicians who are considering T-DXd as a therapeutic option for their patients following HER2 testing. In this post hoc analysis, T-DXd showed durable and clinically meaningful benefit in patients with HER2 IHC 3+ and IHC 2+ solid tumors per local or central HER2 IHC test results for study enrollment; efficacy results according to ICR were generally consistent with investigator-assessed outcomes. The highest response rate and longest DOR were seen in patients with IHC 3+ tumors. Overall, these IHC 3+ and IHC 2+ subgroup data by study enrollment HER2 IHC test results are comparable with the ORR and median DOR subgroup analyses previously reported according to HER2 IHC central test results using the HercepTest™ (Dako; investigator-assessed ORR of 61.3% and 27.2% and median DOR of 22.1 months and 9.8 months in patients with IHC 3+ and IHC 2+ tumors, respectively). Favorable antitumor activity was also observed across a broad range of tumor types with IHC 3+ and IHC 2+ expression, similar to that previously shown by HER2 IHC central test results.
As the majority of patients were enrolled based on local HER2 IHC testing, this analysis supports use of local IHC test results to identify patients whose tumors have HER2 IHC 3+ expression and are likely to respond to T-DXd; the magnitude of T-DXd clinical benefit was consistent irrespective of central IHC confirmation. Considering the recent accelerated approval of T-DXd in the USA, it is important that appropriately validated HER2 tests are used at local laboratories and that pathologists are appropriately trained to evaluate and score solid tumor samples. Across studies of solid tumors, varying prevalence of HER2 IHC 3+/IHC 2+ expression has been observed, ranging from 16 to 33% of biliary tract cancers, 9–56% of urothelial carcinomas, 21–29% of cervical cancers, 18–56% of endometrial cancers, 4–28% of ovarian cancers, and 7–16% of pancreatic cancers. Patients with HER2-expressing solid tumors typically have an inferior prognosis, and there remains a high unmet clinical need for efficacious treatment options. When considering outcomes associated with current standard of care for the tumor types included in this study, the magnitude of clinical benefit observed in DESTINY-PanTumor02 supports T-DXd as a new therapeutic option for pretreated patients with a range of HER2 IHC 3+ solid tumors. As previously reported, the safety findings in this trial are consistent with the known profile of T-DXd. ILD/pneumonitis remains an important identified risk, and proactive monitoring, early detection, and active management are critical to prevent high-grade ILD/pneumonitis. Limitations of the study include the single-arm design, which did not enable inclusion of comparators owing to the range of tumor cohorts investigated, and the small numbers of patients, reflecting the low prevalence of IHC 3+ and IHC 2+ expression in some tumor types.
This post hoc analysis affirms the tumor-agnostic activity of T-DXd in patients with HER2 IHC 3+ and IHC 2+ solid tumors when HER2 expression is determined primarily by locally available IHC assessment, reflecting real-world HER2 testing practice; the overall benefit is consistent whether HER2 testing is conducted locally or centrally.
Catch me if you can: SARS-CoV-2 detection in brains of deceased patients with COVID-19 | 4c711462-27ac-4057-857c-6bf45c053ba9 | 7535625 | Pathology[mh] | |
Efficacy and Safety of Stellate Ganglion Block for Treating Angina Pectoris: A Systematic Review and Meta-Analysis | c673248f-7279-4650-8aaa-e4b0f8e17455 | 11842144 | Surgical Procedures, Operative[mh] | Angina pectoris (AP), one of the common clinical manifestations of coronary heart disease, is a clinical syndrome characterized by recurrent chest pain or discomfort caused by transient myocardial ischemia and hypoxia. AP usually presents as a squeezing pain, stuffiness, or a pulling sensation in the chest. However, some patients may not exhibit chest symptoms but instead experience pain or discomfort in the lower jaw, scapula, shoulder, or fingers, while a few cases are mainly characterized by nonspecific symptoms such as fatigue or nausea. AP can be induced or aggravated by emotional excitement, strenuous exercise, cold exposure, and other stimuli, with rest or sublingual nitroglycerin typically providing relief. AP severely affects left heart function and is considered a high-risk factor for acute myocardial infarction (AMI), heart failure, and sudden cardiac death. AP prevalence in middle-aged people in Sweden was estimated to be 3.5%, whereas its prevalence was 4.69% and 7.02% in older men and women, respectively, in India. Previous studies have demonstrated that AP prevalence is positively correlated with age, with nearly half of all cases occurring in the older population (> 65 years). AP is also a precursor to coronary artery disease and commonly co-occurs with diabetes, hypertension, congestive heart failure, and peripheral vascular disease, which can result in further health problems. Additionally, long-term pain in patients with AP can easily lead to anxiety and depression that worsen their mental health and substantially reduce their quality of life. Some patients even succumb to AMI or heart failure.
All these severe consequences exert a heavy economic burden on affected individuals' families and on medical systems. Current clinical treatment guidelines for AP primarily focus on drug control, with the preferred antianginal drugs comprising short-acting nitrates, β-receptor antagonists, and calcium channel blockers. However, these drugs have a limited analgesic effect and fail to meet clinical analgesic needs, particularly in unstable angina pectoris (UAP) and variant angina. Therefore, studies exploring new and effective therapies for managing AP are urgently required. The stellate ganglion (SG), also known as the cervicothoracic ganglion, is part of the cervical sympathetic chain and displays a high degree of variability in its morphology. The SG, which derives its name from its irregular star shape, presents as a fusion of the inferior cervical ganglion and the first thoracic ganglion anterior to the neck of the first rib in approximately 80% of individuals. Anatomically, the SG is located within the vertebral artery triangle, bordered externally by the scalene muscle, internally by the longus colli, trachea, and esophagus, and posteriorly by the C7 transverse process and the prevertebral fascia, with the subclavian artery passing below. Stellate ganglion block (SGB) is a treatment method involving the injection of a local anesthetic into the SG to selectively block the sympathetic nerves innervating the ipsilateral head, neck, chest, and upper limb regions. In 2005, an international paper on the effectiveness of SGB in treating chronic refractory angina was published, prompting a rapid wave of research on this approach. Subsequently, SGB has been gradually introduced into clinical practice. In recent years, evidence-based medicine has also demonstrated the benefits of drug injection therapy for circulatory disorders.
SGB is now widely used for managing cardiovascular diseases, immune disorders, endocrine diseases, and various pain syndromes. Prior researchers have shown that the cardiovascular effects mediated by SGB are strongly associated with the inhibition of sympathetic nerve activity, improvement of cardiac blood supply, and attenuation of the cardiac stress response. Furthermore, SGB offers remarkable advantages such as high operability, strong pertinence, low levels of patient pain, and reliable curative efficacy. However, the efficacy and safety of SGB in AP treatment are still not corroborated by evidence-based medicine, and data supporting its standardized use are lacking. Therefore, this systematic review and meta-analysis comprehensively collated and analyzed the published randomized controlled trials (RCTs) on SGB treatment for AP. Our aim was to objectively assess the efficacy and safety of SGB in AP treatment to provide a data-driven basis for the clinical application of SGB in patients with AP. This systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement and was prospectively registered on PROSPERO (registration number: CRD42023485135).

2.1. Inclusion Criteria

2.1.1. Types of Studies

Clinical RCT studies, with no restriction on the publication language.

2.1.2. Types of Patients

Patients with a definite clinical diagnosis of AP established according to the diagnostic guidelines of the National Institute for Health and Clinical Excellence (NICE) or the report of the International Society and Federation of Cardiology/World Health Organization on AP. Patient inclusion was not restricted by age, sex, region, race, or AP type, whereas those with congenital heart disease, rheumatic heart disease, malignant tumors, and mental illness were excluded.

2.1.3. Types of Interventions

The experimental group received unilateral or bilateral SGB alone or combined with other therapies. In contrast, the control group did not undergo SGB and was primarily administered conventional medical treatment. Moreover, in studies where the experimental group was treated with SGB combined with other therapies, the control and experimental groups differed only in the use or nonuse of SGB, while the remaining conditions were consistent.

2.2. Exclusion Criteria

Studies were excluded based on the following criteria:
1. Non-RCTs, cohort studies, animal experiments, cellular experiments, case reports, conference abstracts, research protocols, and reviews.
2. Literature unrelated to SGB or AP and lacking angina-related efficacy indicators.
3. Literature incompatible with the purpose of the current review and meta-analysis.
4. Literature without full-text availability or with prominent data gaps that cannot be filled.
5. Duplicate publication of literature data.

2.3. Outcome Indicators

Primary outcomes were as follows: (1) AP symptoms (frequency, duration, and pain intensity of AP); (2) electrocardiogram (ECG) findings, including heart rate (HR), the detection rate of S-T segment elevation ≥ 0.1 mV after 24 h of treatment, the detection rate of abnormal T waves after 24 h of treatment, and S-T segment displacement after treatment; (3) level of serum myocardial enzymes (i.e., cardiac troponin I (cTnI)); and (4) clinical efficacy. Secondary outcomes included: (1) vital sign parameters (i.e., mean arterial pressure (MAP) and blood oxygen saturation (SpO2)); (2) disease improvement (the transition rate from UAP to stable angina pectoris (SAP)); and (3) postoperative cardiovascular- and cerebrovascular-related adverse events, including the incidence of AMI, interventional therapy, stroke, rehospitalization, and death.

2.4. Data Retrieval

We attempted to identify all clinical RCT studies on SGB treatment for AP by comprehensively searching PubMed, Embase, Cochrane Library, Web of Science, Chinese National Knowledge Infrastructure (CNKI), China Science and Technology Journal Database (VIP), and Wanfang databases. The retrieval period for eligible literature was from the date of database establishment to October 10, 2024. Simultaneously, we supplemented the search by examining the Chinese Clinical Trial Registry and ClinicalTrials.gov ( https://clinicaltrials.gov/ ), along with manually searching the reference lists of the included studies and related reviews to obtain additional eligible literature. The search terms were a combination of subject-specific words with free-text words, including “angina pectoris,” “coronary artery disease,” “precordial pain,” “stellate ganglion bloc∗,” and “cervicothoracic ganglion.” Moreover, the search query was adjusted appropriately based on the characteristics of each database. presents the search strategy employed for data retrieval from the PubMed database.

2.5. Literature Screening and Data Extraction

All literature was managed using Zotero software. Two researchers (Y.Z. and J.H.) independently screened the literature according to the established inclusion criteria and cross-checked the results. The same two researchers independently extracted information from the included literature into Microsoft Excel 2016 and cross-checked the data, mainly consisting of the first author's name, publication year, region, study type, sample size, age, interventions, outcome indicators, and follow-up time. A third researcher (J.Y.) resolved any differences that arose during these steps.

2.6. Bias Risk and Quality Assessment

The bias risk in the included studies was evaluated using Review Manager 5.4 software provided by the Cochrane Collaboration.
This evaluation was based on the following seven items: random sequence generation, allocation concealment, blinding of investigators and participants, blinding of outcome evaluators, incomplete outcome data, selective reporting, and other biases. Two researchers (Y.L. and L.Y.) independently rated each item in each article as “low risk,” “unclear,” or “high risk” and cross-validated the ratings. Any disagreements were resolved by a third researcher (J.Y.). GRADEpro GDT software was employed to assess the evidence quality of the outcome indicators. Evidence quality was rated as “high,” “moderate,” “low,” or “very low” according to five aspects: limitations of the study design, imprecision, inconsistency, indirectness, and publication bias.

2.7. Data Analysis

Statistical analysis was performed using Review Manager 5.4 software. Different effect sizes were selected according to the data type. Specifically, relative risk (RR) and 95% confidence interval (CI) were employed for binary variables, whereas mean difference (MD) and 95% CI were calculated for continuous variables. The effect model was selected based on the magnitude of heterogeneity, which was estimated by the χ² and I² tests. Heterogeneity was considered significant at p < 0.1 and I² > 50%. In particular, the fixed-effects model (FEM) was chosen for the meta-analysis when heterogeneity was low (p > 0.1, I² < 50%), while the random-effects model (REM) was utilized in the case of high heterogeneity. Subgroup and sensitivity analyses were applied to explore potential sources of heterogeneity. Additionally, descriptive analysis was performed instead of the meta-analysis when heterogeneity was significant. Finally, p < 0.05 was considered statistically significant.
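The fixed-effects pooling described above is a straightforward inverse-variance computation, with Cochran's Q and I² quantifying heterogeneity. A minimal sketch follows; the two study-level mean differences and standard errors are hypothetical values chosen only to mirror a two-study, low-heterogeneity case (Review Manager performs the equivalent calculation on the real trial data):

```python
import math

def fixed_effect_md(studies, z_crit=1.96):
    """Inverse-variance fixed-effect pooling of mean differences.
    `studies` is a list of (md, se) pairs; returns pooled MD, 95% CI, Z, and I^2 (%)."""
    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (pooled - z_crit * se_pooled, pooled + z_crit * se_pooled)
    z = abs(pooled) / se_pooled
    # Cochran's Q and the I^2 inconsistency statistic
    q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(studies, weights))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, z, i2

# Hypothetical two-study input (not the trial data): similar effects give I^2 = 0%,
# so the fixed-effects model would be selected under the p > 0.1, I^2 < 50% rule.
pooled, ci, z, i2 = fixed_effect_md([(-2.40, 0.25), (-2.35, 0.30)])
print(f"MD {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), Z = {z:.2f}, I^2 = {i2:.0f}%")
```

With heterogeneous inputs, Q exceeds its degrees of freedom, I² rises, and the decision rule above would switch to the random-effects model instead.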
3.1. Literature Screening Process and Outcomes

According to the established search strategy, we retrieved 804 literature records, with no additional records. After deleting the duplicates, 516 articles were retained. Among them, 491 were excluded after reviewing the titles and abstracts, leaving 25 for further screening.
The 491 articles excluded after applying our inclusion and exclusion criteria comprised six case reports, 22 animal experiments, 143 unrelated to AP, 277 unrelated to SGB, two conference abstracts, one scientific and technological achievement, one study protocol, one cell experiment, one news report, 32 reviews, and five meta-analyses. After further reading the full text of the remaining 25 articles, six non-RCT study designs, two articles with incomplete data, one article without full-text availability, six articles with inconsistent research purposes, three reviews, and one duplicate publication were excluded. Ultimately, six articles were included in the meta-analysis. The literature screening process is illustrated in .

3.2. Characteristics of Included Studies

The six included studies were all conducted in China and publicly published in Chinese databases in the years 1996–2011. A total of 373 patients with AP, aged 39–81 years, were included. Furthermore, > 60% (230) of the patients had UAP, 65 had drug-resistant AP, and 78 had unspecified AP subtypes. The experimental group comprised 193 patients who were treated with SGB alone, SGB combined with drugs, or SGB combined with conventional medical treatment. In the control group, 180 patients received conventional medical treatment or a single-drug intervention. Additionally, no significant differences were observed between the baseline characteristics across all studies, indicating comparable basic information. The characteristics of the included studies are presented in . Finally, one study had clearly misplaced headers and content in two outcome data tables; hence, we interchanged them appropriately and performed sensitivity analyses to reduce publication bias.

3.3. Methodological Quality in Included Studies

The Cochrane risk of bias assessment tool was utilized to assess the bias risk in the included studies.
The six included studies were all clinical RCTs with no significant differences at baseline. All studies only mentioned randomization but did not specifically describe the method of random sequence generation and allocation concealment nor the implementation of blinding of the investigators, participants, and outcome evaluators. Nevertheless, all studies reported expected outcomes with complete outcome data and a low risk of other biases. The risk of bias assessment is detailed in Figures and . 3.4. Meta-Analysis Results 3.4.1. Symptoms of AP 3.4.1.1. Frequency of AP Two studies ( n = 148 participants) reported AP frequency. No significant heterogeneity was found between the two studies ( p = 0.96, I 2 = 0%); thus, the FEM was selected. The meta-analysis results showed that AP frequency in the experimental group was significantly lower than that in the control group (MD: −2.39, 95% CI: −2.77 to −2.02; Z = 12.44, p < 0.00001), as depicted in . Therefore, SGB can substantially reduce AP frequency. 3.4.1.2. Duration of AP Two studies ( n = 148 participants) revealed AP duration. Similar to AP frequency, the FEM was chosen for assessing AP duration due to the nonsignificant heterogeneity ( p = 0.85, I 2 = 0%). Subsequent analysis demonstrated that the experimental group had a significantly shorter AP duration than the control group (MD: −7.16, 95% CI: −7.68 to −6.65; Z = 27.33, p < 0.00001), as presented in . Thus, SGB has a potential positive effect of shortening AP duration. 3.4.1.3. Pain Intensity of AP Two studies ( n = 68 participants) utilized visual analog scale (VAS) scores as a measure of pain intensity in patients with AP. Considering that the VAS scores were reported at multiple time points, we collated the data and conducted a subgroup analysis. Furthermore, the REM was selected owing to the significant heterogeneity between the two studies ( p < 0.00001, I 2 = 93%). 
Additional analysis showed significant differences in the VAS scores between the experimental and control groups at 24, 72, 120, and 168 h after treatment . These findings imply that SGB can effectively reduce VAS scores and alleviate the pain perception of patients with AP. 3.4.2. ECG Findings 3.4.2.1. HR Two studies ( n = 68 participants) provided HR data in patients with AP. Given that multiple time points were involved, data collation and subgroup analysis were performed. Additionally, the FEM was used for analysis because no significant heterogeneity was detected between the two studies ( p = 0.89, I 2 = 0%). As observed in , the HR significantly differed between the experimental and control groups at 24, 72, 120, and 168 h after treatment. Hence, SGB can adequately reduce the HR of patients with AP. 3.4.2.2. Detection Rate of S-T Segment Elevation ≥ 0.1 mV After 24 h of Treatment Two studies ( n = 106 participants) reported the detection rate of S-T segment elevation ≥ 0.1 mV after 24 h of treatment. Moreover, considering that the heterogeneity between the studies was relatively small ( p = 0.78, I 2 = 0%), the FEM was chosen for further analysis. The meta-analysis demonstrated that the detection rate of S-T segment elevation ≥ 0.1 mV after 24 h of treatment was significantly lower in the experimental group than in the control group (RR: 0.11, 95% CI: 0.03–0.44; Z = 3.10, p = 0.002), as illustrated in . These findings suggest that SGB can significantly improve myocardial ischemic injury in patients with AP. 3.4.2.3. Detection Rate of Abnormal T Waves After 24 h of Treatment Only one study ( n = 38 participants) evaluated the detection rate of abnormal T waves after 24 h of treatment, demonstrating a significantly lower detection rate in the experimental group than in the control group (RR: 0.15, 95% CI: 0.04–0.59; Z = 2.73, p = 0.006). Thus, SGB has the potential to effectively reduce AP-induced myocardial damage. 3.4.2.4.
S-T Segment Displacement After Treatment One study ( n = 83 participants) assessed the S-T segment displacement after treatment, revealing a significant difference between the experimental and control groups (MD: −0.07, 95% CI: −0.10 to −0.04; Z = 5.26, p < 0.00001). These findings indicate that SGB is beneficial for recovering cardiac function in patients with AP. 3.4.3. Serum Myocardial Enzyme Level Two studies ( n = 68 participants) provided data on the level of the serum myocardial enzyme cTnI. In light of multiple time points in the data, we conducted collation and subgroup analysis. Initial FEM analysis showed a significant difference in the overall combined effect size between the experimental and control groups (MD: −0.27, 95% CI: −0.28 to −0.27; Z = 190.49, p < 0.00001); however, given the significant heterogeneity between the two studies ( p < 0.00001, I 2 = 100%), REM analysis was performed instead. This analysis demonstrated an overall combined effect size of MD = −0.30 (95% CI: −0.51 to −0.09; Z = 2.78, p = 0.005), as presented in . These results imply that SGB is a potential treatment strategy to reduce the levels of the serum myocardial enzyme cTnI in patients with AP. 3.4.4. Clinical Efficacy Three studies ( n = 240 participants) reported the clinical efficacy. As illustrated in , FEM analysis ( p = 0.48, I 2 = 0%) exhibited significant differences in the clinical efficacy between the experimental and control groups (RR: 1.27, 95% CI: 1.14–1.43; Z = 4.19, p < 0.0001). Thus, SGB is a promising treatment method for significantly improving clinical efficacy. 3.4.5. Measures of Vital Sign Parameters 3.4.5.1. MAP Two studies ( n = 68 participants) measured MAP levels. Considering that the data had multiple time points, we conducted data collation and subgroup analysis.
Subsequent FEM analysis ( p = 0.60, I 2 = 0%) showed that the MAP levels of the experimental group were significantly lower than those of the control group at 24, 72, 120, and 168 h after treatment, as shown in . Hence, SGB is a valuable approach for reducing MAP levels in patients with AP. 3.4.5.2. SpO 2 Two studies ( n = 68 participants) investigated SpO 2 levels. In view of the multiple time points involved, we conducted data collation and subgroup analysis. The FEM analysis ( p = 0.19, I 2 = 30%) demonstrated that the SpO 2 levels in the experimental group were significantly higher than those in the control group at 24, 72, 120, and 168 h after treatment . These observations indicate that SGB can effectively enhance SpO 2 levels in patients with AP. 3.4.6. Disease Improvement The study by Wu et al. ( n = 83 participants) examined the transition rate from UAP to SAP to describe the level of disease improvement. The meta-analysis showed an effect size of RR = 1.69 (95% CI: 1.23–2.32; Z = 3.22, p = 0.001), indicating that SGB could effectively promote the transition from UAP to SAP and thus lead to significant disease improvement. 3.4.7. Postoperative Cardiovascular- and Cerebrovascular-Related Adverse Events 3.4.7.1. Incidence of AMI Two studies ( n = 148 participants) reported AMI incidence during follow-up. Given that no significant heterogeneity was observed between the two studies ( p = 0.55, I 2 = 0%), FEM analysis was performed. The results revealed an overall combined effect size of RR = 0.28 (95% CI: 0.11–0.73; Z = 2.62, p = 0.009), as depicted in . This finding indicates that SGB can significantly reduce AMI incidence in patients with AP. 3.4.7.2. Incidence of Interventional Therapy, Stroke, and Rehospitalization Only one study ( n = 65 participants) assessed the incidence of interventional therapy, stroke, and rehospitalization during follow-up.
The meta-analysis findings suggested that SGB could significantly reduce rehospitalization incidence; however, no such statistical significance was observed for interventional therapy and stroke incidence, as shown in . 3.4.7.3. Incidence of Death Two studies ( n = 148 participants) investigated the incidence of death during follow-up. FEM analysis ( p = 0.33, I 2 = 0%) showed no significant differences in the overall combined effect size between the experimental and control groups (RR: 0.28, 95% CI: 0.05–1.76; Z = 1.35, p = 0.18), as illustrated in . Thus, SGB may not reduce death incidence in patients with AP. 3.5. Publication Bias We did not investigate publication bias because fewer than 10 articles were included in this systematic review. 3.6. Safety SGB-related adverse events were not reported in the six included studies. Preliminary evidence suggests that SGB in AP treatment is associated with relatively fewer adverse reactions and higher safety. 3.7. Evaluation of Evidence Quality The quality of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system. The assessment revealed that the evidence quality ranged from low to very low, which can be attributed to the small sample sizes used in the included studies and the lack of information on allocation concealment and blinding. Of the various outcome indicators , HR, MAP, SpO 2 level, AP frequency, AP duration, the incidence of AMI and rehospitalization, the transition rate from UAP to SAP, the detection rate of S-T segment elevation ≥ 0.1 mV on ECG after 24 h of treatment, the detection rate of abnormal T waves on ECG after 24 h of treatment, S-T segment displacement on ECG after treatment, and clinical efficacy were low-quality evidence. Additionally, cTnI level, VAS score, and the incidence of interventional therapy, stroke, and death were very low-quality evidence. 
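The pooling procedure described in the statistical analysis (inverse-variance fixed-effects pooling, Cochran's Q with I 2 to decide between FEM and REM, and DerSimonian–Laird random effects) can be sketched as below. This is an illustrative outline using hypothetical mean differences and standard errors, not the values extracted from the six included trials; the actual analyses were run in Review Manager 5.4.

```python
import math

def fixed_effects(effects, ses):
    """Inverse-variance fixed-effects (FEM) pooling of per-study effects."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

def heterogeneity(effects, ses):
    """Cochran's Q and I^2, the statistics used to choose FEM vs. REM."""
    pooled, _ = fixed_effects(effects, ses)
    weights = [1.0 / se ** 2 for se in ses]
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

def random_effects(effects, ses):
    """DerSimonian-Laird random-effects (REM) pooling for high I^2."""
    weights = [1.0 / se ** 2 for se in ses]
    q, _ = heterogeneity(effects, ses)
    c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]
    pooled = sum(w * e for w, e in zip(w_star, effects)) / sum(w_star)
    return pooled, math.sqrt(1.0 / sum(w_star))

# Hypothetical per-study mean differences and standard errors
# (NOT the data extracted from the included studies).
effects = [-2.40, -2.35]
ses = [0.28, 0.26]

q, i2 = heterogeneity(effects, ses)
model = fixed_effects if i2 < 50 else random_effects  # I^2 decision rule
pooled, se = model(effects, ses)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
z = pooled / se                                # test of overall effect
```

For binary outcomes such as the S-T segment detection rates, the same pooling is applied to log relative risks (log RR, with SE = sqrt(1/a − 1/n1 + 1/c − 1/n2) from each 2×2 table), and the pooled value is exponentiated back to an RR scale.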
Preliminary evidence suggests that SGB in AP treatment is associated with relatively few adverse reactions and a favorable safety profile.

The quality of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system. The assessment revealed that the evidence quality ranged from low to very low, which can be attributed to the small sample sizes of the included studies and the lack of information on allocation concealment and blinding. Of the various outcome indicators, HR, MAP, SpO2 level, AP frequency, AP duration, the incidence of AMI and rehospitalization, the transition rate from UAP to SAP, the detection rate of S-T segment elevation ≥ 0.1 mV on ECG after 24 h of treatment, the detection rate of abnormal T waves on ECG after 24 h of treatment, S-T segment displacement on ECG after treatment, and clinical efficacy were supported by low-quality evidence. Additionally, cTnI level, VAS score, and the incidence of interventional therapy, stroke, and death were supported by very low-quality evidence.

AP is a prevalent progressive heart disease whose incidence has been gradually increasing, particularly in the younger population. The pain in AP not only stems from the ischemic chest pain triggered by reduced coronary perfusion but is also closely associated with sympathetic overexcitation. Cardiac sensations are primarily conveyed by visceral sensory nerves. Myocardial ischemia and hypoxia can heighten the excitation of the cardiac sympathetic nerves, which transmit signals to the amygdala, hypothalamus, and insula, ultimately inducing pain. Additionally, excitation of the cardiac sympathetic nerves can cause cardiovascular contraction, further aggravating ischemia and hypoxia in the ischemic area of the heart and generating a vicious cycle of ischemia and pain.
Ischemic stimulation can also induce increased production of norepinephrine (NE), nerve growth factors (NGFs), and inflammatory factors, thereby triggering stress and inflammatory responses that further exacerbate myocardial injury and pain perception. Therefore, a safe and efficient approach for blocking the transmission of pain signals and the production of pain-eliciting substances is crucial in AP treatment.

SGB can be performed unilaterally or with alternating bilateral blocks, with a successful procedure reflected by Horner syndrome on the ipsilateral side. This treatment mainly regulates cardiovascular movement, pain transmission, and glandular secretion in the distribution area by inhibiting the function of the preganglionic and postganglionic fibers of the SG. SGB also modulates the functional activities of the autonomic nervous, endocrine, and immune systems via the hypothalamic mechanism. In recent years, the application of ultrasound imaging technology has augmented the visualization, precision, and safety of SGB and notably reduced adverse events, such as neurovascular damage, caused by the earlier blind-exploration approach to the SGB procedure. Consequently, ultrasound-guided SGB has become one of the most commonly used strategies for clinical pain treatment.
Prior studies have highlighted that SGB is primarily employed in AP treatment to inhibit the activity of the sympathetic nervous system and induce the following effects: (1) effective alleviation of ischemic chest pain by dilating the coronary artery, increasing cardiac blood perfusion, improving myocardial blood and oxygen supply, and accelerating metabolite excretion; (2) protection of cardiac function by inhibiting overexcited cardiac sympathetic nerves to reduce HR, myocardial contractility, and myocardial oxygen consumption; (3) blockade of afferent pain signaling from the cardiac sympathetic nerves; (4) regulation of central pain processing and alleviation of pain perception through effects on the amygdala, hypothalamus, and insula, which have neuronal connections with the SG; and (5) reduction of the cardiac inflammatory response and stress-related myocardial injury, relieving AP by decreasing the production of pain-inducing substances such as NGFs, NE, neuropeptides, and inflammatory factors.

In this study, we conducted a systematic review and meta-analysis of six RCTs (including 373 participants) to evaluate the efficacy and safety of SGB for AP treatment. For this purpose, we examined the differences in outcome indicators, such as symptoms, ECG findings, serum myocardial enzyme levels, and clinical efficacy, between the experimental and control groups. Our results revealed that SGB significantly reduced VAS scores, AP frequency and duration, HR, the detection rate of S-T segment elevation ≥ 0.1 mV, the presence of abnormal T waves on ECG after 24 h of treatment, and the level of the serum myocardial enzyme cTnI, while improving clinical efficacy. Moreover, the SpO2 levels and the transition rates from UAP to SAP were significantly higher, and the MAP level and the incidence of AMI and rehospitalization were significantly lower, in the experimental group than in the control group.
Conversely, no significant differences were observed in the incidence of interventional therapy, stroke, and death between the two groups. Therefore, these findings indicate that SGB can effectively treat AP by significantly improving disease symptoms and ECG findings, alleviating myocardial injury, promoting cardiac function recovery, and lowering the occurrence of some cardiovascular- and cerebrovascular-related adverse events. However, SGB may not prevent death, stroke, or the need for interventional therapy in patients with AP.

The overall methodological quality was moderate based on the Cochrane Handbook items. According to the GRADE evidence rating system, the overall quality of the outcome indicators was low, with 12 (70.59%) indicators identified as low-quality evidence and five (29.41%) as very low-quality evidence. Additionally, given the limited number of included studies and small sample sizes, the obtained results require further validation. Although the absence of SGB-related adverse events in all included studies preliminarily suggests that SGB treatment for AP leads to fewer adverse reactions and increased safety, more high-quality evidence is required to verify this conclusion.

Our systematic review and meta-analysis has a few limitations that should be considered. (1) The number and sample sizes of the included studies were small, and the outcome indicators were rated as low- or very low-quality evidence, all of which might have rendered the statistical results unreliable. Therefore, future high-quality, large-sample clinical RCTs are essential and may substantially alter the existing evidence and assessment results. (2) The included studies were published as early as 1996, with most conducted from 2006 to 2011. Hence, the included studies may not accurately reflect the current state of SGB technology.
Moreover, five of the included studies used blind exploration rather than ultrasound guidance for SGB, while another study used an earlier electrotherapy technique. These less advanced methods may have led to underestimation of the efficacy and safety of SGB for AP treatment. (3) The overall methodological quality of the included studies was moderate, with limitations such as suboptimal implementation of allocation concealment and blinding. Additionally, blinding of the investigators and participants may have been challenging because SGB is an invasive treatment, which could have further biased the results. (4) Certain outcome indicators exhibited significant heterogeneity, possibly due to complex factors such as the methodological quality, sample size, and control interventions of the included studies, along with the type of AP and the age, comorbid conditions, and physical health of the participants. Although all control-group interventions comprised conventional medical treatments, the specific regimens may have varied among individual patients. Moreover, the surgical method and point of intervention (unilateral or bilateral) employed in the SGB procedure, as well as the proficiency and accuracy of the operators, may have contributed to the heterogeneity in the outcome indicators.

Finally, this study offers some prospects and suggestions for future research. First, multicenter, high-quality, large-sample RCTs employing the latest SGB techniques (such as ultrasound-guided SGB) for AP treatment are urgently required to verify our results and provide an accurate perspective on the therapeutic effect of SGB on AP at the current technological level. Second, safety should be considered a decisive factor in determining the clinical application value of an intervention.
Therefore, future studies should equally focus on observing and accurately recording the safety-related indicators associated with SGB in AP treatment, with the longest possible follow-up to supplement the safety evaluation. Third, prospective registration of clinical RCTs is essential for improving research transparency, reducing the risk of publication bias, and avoiding duplicated effort and wasted resources. Lastly, RCTs should strictly adhere to standardized reporting guidelines, including the Consolidated Standards of Reporting Trials (CONSORT), to ensure high-quality methodology.

In conclusion, our systematic review and meta-analysis suggests that SGB in patients with AP can safely and effectively improve disease symptoms and ECG findings, alleviate myocardial injury, promote cardiac function recovery, and reduce the incidence of AMI and rehospitalization. However, considering the limitations of this meta-analysis, more high-quality RCTs are essential to validate these conclusions.
Long-term impact of molecular epidemiology shifts of methicillin-resistant Staphylococcus aureus

Bloodstream infection (BSI) is a severe condition that can lead to sepsis, a systemic inflammatory response. Therefore, administering appropriate treatment as early as possible is crucial. However, when drug-resistant bacteria cause BSI, appropriate treatment may be delayed. Methicillin-resistant Staphylococcus aureus (MRSA) is the most common multidrug-resistant pathogen causing BSI. In Japan and the United States, the predominant MRSA strain was the New York/Japan clone, which is defined as ST5 harboring the SCC mec type II island (ST5-MRSA-II) and exhibits multidrug resistance. However, our previous report revealed a decrease in SCC mec type II from 79.2% between 2003 and 2007 to 44.9% between 2008 and 2011; conversely, SCC mec types I and IV increased. According to nationwide surveillance conducted in 2019, ST8 carrying SCC mec type IV (ST8-MRSA-IV) and clonal complex 1 carrying SCC mec type IV (CC1-MRSA-IV) were the predominant molecular types in BSI. Additionally, ST8-MRSA-IV is subdivided into several groups based on molecular characteristics. However, owing to the limited number of MRSA strains detected at each facility during the surveillance, it remains undetermined whether similar trends occurred in individual facilities. Furthermore, the differences in the backgrounds of patients infected with the various MRSA types remain unknown. Therefore, in this study, we aimed to achieve three main objectives. First, by incorporating the results of our two previous reports, we investigated the long-term changes in SCC mec types, as well as changes in patient characteristics and clinical outcomes, from 2003 to 2019.
Second, using MRSA strains isolated from 2012 to 2019, we performed whole-genome sequencing (WGS) to analyze the molecular epidemiological characteristics of MRSA detected in BSIs at our hospital since 2012 and to assess how these changes align with the trends observed in the 2019 nationwide surveillance. Third, we investigated the differences in patient background, clinical outcomes, and the positivity rates for drug-resistance and virulence genes among the major MRSA types.

Study design

This retrospective observational study was conducted at Nagasaki University Hospital, Nagasaki, Japan, from January 2012 to December 2019. Nagasaki University Hospital is a tertiary medical institution with 874 beds in Nagasaki Prefecture, the westernmost part of Japan. The number of MRSA isolates detected from blood cultures per 1,000 patient-days did not change between 2003 and 2019 (0.148 and 0.145, respectively), whereas the number of methicillin-susceptible Staphylococcus aureus (MSSA) isolates per 1,000 patient-days increased from 0.042 in 2003 to 0.257 in 2019 (Supplementary Figure 1). At Nagasaki University Hospital, screening and decolonization for MRSA were performed at the discretion of each unit, namely the Intensive Care Unit (ICU), Neonatal Intensive Care Unit (NICU)/General Care Unit (GCU), Orthopedics, and Cardiovascular Surgery. The infection control team did not proactively intervene in MRSA treatment; instead, it intervened only upon consultation by the principal physicians. No special infection control measures were implemented solely based on the detection of MRSA. However, if an outbreak was suspected, the infection control team investigated to identify the source and implemented appropriate measures.
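The per-1,000 patient-days rates quoted above follow a simple normalization; a one-line sketch with a hypothetical denominator (the text does not give the hospital's patient-day totals):

```python
def rate_per_1000_patient_days(isolates: int, patient_days: int) -> float:
    """Detection rate expressed per 1,000 patient-days."""
    return isolates / patient_days * 1000

# Hypothetical example: 37 MRSA isolates over 250,000 patient-days
# happens to reproduce the 0.148 figure quoted for 2003.
rate = rate_per_1000_patient_days(37, 250_000)  # ~0.148
```

Normalizing by patient-days rather than admissions makes rates comparable across years with different bed occupancy.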
Patient data, including the blood culture collection setting (outpatient or inpatient), the hospital day on which blood cultures were collected, antimicrobial use during the 30 days before the onset of BSI, initial antimicrobial regimens for BSI, mortality, and the details required to calculate the Charlson comorbidity index and Sequential Organ Failure Assessment (SOFA) score, were collected from the medical records. MRSA BSI was classified as community-acquired, healthcare-associated, or hospital-acquired following the previous report. MRSA strains isolated from blood cultures from 2012 to 2019 were subjected to antimicrobial susceptibility testing (AST) and whole-genome sequencing (WGS). To evaluate long-term trends in SCC mec types, patient characteristics, and clinical outcomes, data from 2003 to 2011 were retrieved from the databases used in our previous studies. To match the study periods of the previous studies, the data obtained in this study were analyzed separately in two four-year periods, 2012–2015 and 2016–2019.

Strain collection

In the previous studies conducted between January 2003 and December 2011, MRSA strains detected in one or more blood cultures were analyzed because collecting more than two sets of blood cultures was uncommon at our hospital. However, collecting two or more sets of blood cultures has become more common since the 2010s. Therefore, in this study, we analyzed MRSA strains detected in two or more blood cultures between January 2012 and December 2019, applying the same inclusion criteria as the nationwide surveillance. To avoid redundancy, when multiple isolates were detected from the same patient during a single hospitalization, only the first isolate was included in the analysis.

Antimicrobial susceptibility testing

We measured the minimum inhibitory concentrations (MICs) by broth microdilution using a dry plate (Eiken, Tokyo, Japan), according to the manufacturer's instructions.
Antimicrobial susceptibility was determined according to the Clinical and Laboratory Standards Institute guidelines (CLSI M100-Ed31) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) Version 11.0.

Whole-genome sequencing

All procedures were performed according to the respective manufacturer's instructions. After extracting DNA from the MRSA strains using a Quick-DNA Fungal/Bacterial Kit (ZYMO RESEARCH, Irvine, CA, United States), we performed WGS on the MRSA strains isolated between 2012 and 2015 using an Ion PGM HiQ View OT2 Kit (Thermo Fisher Scientific, Waltham, MA, United States). Enriched samples were loaded onto an Ion 318 chip and sequenced on an Ion Torrent Personal Genome Machine with an Ion PGM HiQ View Sequencing Kit (Thermo Fisher Scientific). DNA libraries for Ion PGM were generated using the Ion Xpress Plus Fragment Library Kit (4471269; Thermo Fisher Scientific). We performed WGS for the MRSA strains isolated between 2016 and 2019 using the MiSeq system (Illumina, San Diego, CA, United States) and the MiSeq Reagent Kit v3 (600 cycles) (Illumina). We also used the MiSeq system to re-sequence MRSA strains isolated between 2012 and 2015 that required retesting. DNA libraries for the MiSeq system were generated using the Invitrogen Collibri ES DNA Library Prep Kit for Illumina (A38607096; Thermo Fisher Scientific, Waltham, MA, United States).

Analysis of molecular characteristics

Sequence data were assembled using the CLC Genomics Workbench Microbial Genomics Module (Qiagen, Venlo, Netherlands). Multilocus sequence types (MLST) were determined using PubMLST (https://pubmlst.org). We determined the SCC mec and spa types using SCCmecFinder (ver. 1.2) and spaTyper (software version 1.0, database version 2023-6-19), respectively, on the Center for Genomic Epidemiology website (https://www.genomicepidemiology.org).
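As a toy illustration of what the MLST assignment amounts to, the seven housekeeping-gene allele numbers are matched against a table of known sequence-type profiles; the profile rows below are placeholders for this sketch, not entries quoted from the PubMLST database:

```python
# Seven S. aureus MLST loci, in the conventional order.
LOCI = ("arcC", "aroE", "glpF", "gmk", "pta", "tpi", "yqiL")

# Placeholder profile table mapping an allele tuple to a sequence type.
PROFILES = {
    (1, 4, 1, 4, 12, 1, 10): "ST5",
    (3, 3, 1, 1, 4, 4, 3): "ST8",
}

def assign_st(alleles: dict) -> str:
    """Return the ST whose profile matches the allele calls, if any."""
    key = tuple(alleles[locus] for locus in LOCI)
    return PROFILES.get(key, "novel/unassigned")

st = assign_st({"arcC": 3, "aroE": 3, "glpF": 1, "gmk": 1,
                "pta": 4, "tpi": 4, "yqiL": 3})
```

A profile that matches no row would be submitted to PubMLST for assignment of a new ST number.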
Resistance and virulence genes were detected using ResFinder (software version 2023-08-22, database version 2023-04-12) and VirulenceFinder (software version 2.0.3, database version 2022-12-02), respectively, on the Center for Genomic Epidemiology website (https://www.genomicepidemiology.org). Our previous study revealed very similar bacteriological characteristics between ST1 carrying SCC mec type IV (ST1-MRSA-IV) and ST2725 carrying SCC mec type IV (ST2725-MRSA-IV); further, a preliminary investigation as part of the current study showed that ST1-MRSA-IV, ST2725-MRSA-IV, and ST5213-MRSA-IV shared the same characteristics. Hence, we combined the data from ST1-MRSA-IV, ST2725-MRSA-IV, and ST5213-MRSA-IV as CC1-MRSA-IV. Core-genome MLST (cgMLST) was performed using Ridom SeqSphere+ v.9.0.10 (Ridom GmbH, Münster, Germany). Minimum spanning trees (MST) and unweighted pair group method with arithmetic mean (UPGMA) trees were created based on MLST, cgMLST, and S. aureus accessory genes using Ridom SeqSphere+ (ver. 9). Samples with more than 10% missing values among the data used for the distance calculation were excluded from the construction of the MST or UPGMA trees. For the UPGMA analysis including strains detected during the previous nationwide surveillance, we excluded the 13 strains detected at Nagasaki University Hospital to avoid duplication. In the UPGMA analysis, clusters of MRSA strains were defined at a distance of 0.1, and subclusters of ST8 with SCC mec type I (ST8-MRSA-I), ST8-MRSA-IV, CC1-MRSA-IV, and ST5-MRSA-II were defined at a distance of 0.03.

Statistical analysis

All statistical analyses except the q-value calculation were performed with GraphPad Prism 10 (version 10.1.0; GraphPad Software, Boston, MA, United States). Continuous variables are expressed as mean ± standard deviation. Categorical variables were compared using Fisher's exact test, and q-values were calculated using the Benjamini–Hochberg method for multiple comparisons.
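The Benjamini–Hochberg adjustment that produces these q-values can be sketched in a few lines; the input p-values here are arbitrary examples:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted p-values (q-values): for sorted p-values,
    q_(i) = min over j >= i of (m * p_(j) / j), which controls the FDR."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, p_values[idx] * m / rank)
        q[idx] = running_min
    return q

qs = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
```

Each q-value is at least as large as its raw p-value, which is why a comparison can be significant unadjusted yet non-significant after correction.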
Continuous variables were compared using Student's t-test for two-group comparisons and Tukey's test for multiple comparisons. The statistical significance level was set at q < 0.05, and q-values greater than 0.2 were noted as not significant (n.s.).

Ethics

This study was approved by the Ethics Committee of Nagasaki University Hospital (20072018). MRSA strains collected from blood cultures were anonymized and individually numbered. Patient information collected from the medical records was also anonymized and individually numbered. The Ethics Committee of Nagasaki University Hospital waived the requirement for informed consent.

Changes in SCCmec type, patient characteristics, and outcomes from 2003 to 2019

Twenty-seven and fifty-eight MRSA strains were isolated during 2012–2015 and 2016–2019, respectively. The percentages of SCC mec types I, II, and IV during 2012–2015 were 37.0%, 37.0%, and 25.9%, respectively. The percentages of SCC mec types I, II, IV, and V during 2016–2019 were 15.5%, 15.5%, 65.5%, and 3.4%, respectively. and Supplementary Table 1 show the changes in SCC mec type from 2003 to 2019. SCC mec type II was the most frequently detected type from 2003 to 2015. However, its frequency decreased significantly from 79.2% during 2003–2007 to 15.5% during 2016–2019 (q < 0.001). In contrast, the percentage of SCC mec type IV increased dramatically from 18.2% during 2003–2007 to 65.5% during 2016–2019 (q < 0.001). SCC mec type I showed changes that differed from those of SCC mec types II and IV: its percentage increased significantly from 2.6% during 2003–2007 to 37.0% during 2012–2015 (q < 0.001). SCC mec type I was one of the most frequently detected SCC mec types during 2012–2015; however, its prevalence then decreased by more than half, to 15.5%, during 2016–2019. and Supplementary Table 1 show the characteristics of patients in each study period.
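A self-contained sketch of the kind of Fisher's exact comparison behind these proportions, with the SCC mec type IV counts reconstructed from the stated percentages (7/27 strains in 2012–2015 and 38/58 in 2016–2019); the resulting p-value is unadjusted, whereas the study reports Benjamini–Hochberg q-values:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):  # P(X = x) under the hypergeometric null
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Counts reconstructed from the percentages in the text:
# 7 of 27 type IV strains (25.9%) in 2012-2015 vs. 38 of 58 (65.5%) in 2016-2019.
p = fisher_exact_two_sided(7, 20, 38, 20)
```

With such a large shift in proportions, the unadjusted p-value falls well below 0.05, consistent with the significant q-values reported.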
There were no significant differences in the patients' backgrounds, such as age, sex, or underlying diseases, except for the classification of infection; the percentage of healthcare-associated infections was significantly higher during 2012–2015 than during 2008–2011 (22.2% vs. 3.6%, q = 0.020). Further, the sources of MRSA infection changed from 2003 to 2019. The percentage of intravascular device-related BSI increased significantly from 18.1% during 2003–2007 to 46.6% during 2016–2019 (q = 0.002). In contrast, the percentage of respiratory tract infection-related BSI decreased significantly to 1.7% during 2016–2019, from 16.9% during 2003–2007 and 20.5% during 2008–2011 (q = 0.013 and 0.004, respectively). The severity of BSI also changed between 2003 and 2019: the SOFA score decreased significantly from 5.8 ± 0.5 during 2003–2007 to 3.1 ± 0.5 during 2016–2019 (q = 0.004), and accordingly, in-hospital mortality improved from 39.8% during 2003–2007 to 15.5% during 2016–2019 (q = 0.015). As in the previous studies, when MRSA strains detected in one or more blood cultures were included, in-hospital mortality was 24.3% (17/70) during 2012–2015 and 17.0% (16/94) during 2016–2019, with a significantly lower mortality rate during 2016–2019 than during 2003–2007 (q = 0.008).

MRSA types based on ST and SCCmec type from 2012 to 2019

The predominant combination of ST and SCC mec types during 2012–2019 was ST8-MRSA-IV (37.6%), followed by ST8-MRSA-I (22.4%), ST5-MRSA-II (18.8%), and CC1-MRSA-IV (9.4%) ((A)). The changes in the major combinations are shown in (B) and Supplementary Table 2. Although the percentage of ST8 did not change between 2012–2015 and 2016–2019, the SCC mec type combined with ST8 differed between the two periods. The percentage of ST8-MRSA-IV was higher during 2016–2019 (44.8%) than during 2012–2015 (22.2%; q = 0.056).
In contrast, the percentage of ST8-MRSA-I was significantly lower during 2016–2019 (15.5%) than during 2012–2015 (37.0%; q = 0.048).

Patient characteristics and drug-resistance rates in the major MRSA types

and Supplementary Table 3 show the patient characteristics for each major combination. No significant differences were observed in patient characteristics among the major MRSA types. The SOFA score and in-hospital mortality were lower for CC1-MRSA-IV (2.3 ± 0.6 and 12.5%, respectively) than for the other major MRSA types; however, the differences were not statistically significant. The drug-resistance rates of the major MRSA types to different antimicrobial agents are shown in and Supplementary Table 3. The levofloxacin resistance rate was significantly lower in ST8-MRSA-IV than in all the other major MRSA types (q = 0.002 vs. ST8-MRSA-I and ST5-MRSA-II; q = 0.032 vs. CC1-MRSA-IV). The erythromycin resistance rate was also significantly lower in ST8-MRSA-IV than in ST8-MRSA-I. In contrast, the clindamycin resistance rate of ST5-MRSA-II was significantly higher than that of all the other major MRSA types (q < 0.001 for all comparisons). The CLSI and EUCAST guidelines differ in their breakpoints for determining minocycline resistance. All strains were susceptible to minocycline per the CLSI guidelines; however, when interpreted according to the EUCAST guidelines, a difference in minocycline resistance rates emerged among the major MRSA types: the rate was significantly higher in ST8-MRSA-I and ST5-MRSA-II than in ST8-MRSA-IV and CC1-MRSA-IV. There were also differences in the MICs of beta-lactams, for which neither the CLSI nor EUCAST has set breakpoints for MRSA. The MICs of cefoxitin, cefazolin, imipenem, and meropenem were lower for ST8-MRSA-IV and CC1-MRSA-IV than for ST8-MRSA-I and ST5-MRSA-II ((A)).
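The way one MIC can be read differently under two guideline systems can be sketched as follows; the breakpoint values are illustrative assumptions for this example, not figures quoted from CLSI M100-Ed31 or EUCAST v11.0:

```python
# Illustrative, assumed breakpoints (mg/L) -- NOT the official tables.
BREAKPOINTS = {
    "CLSI": {"susceptible_max": 4.0, "resistant_min": 16.0},
    "EUCAST": {"susceptible_max": 0.5, "resistant_min": 1.0},
}

def interpret(mic: float, guideline: str) -> str:
    """Categorise an MIC as S, I, or R against one breakpoint table."""
    bp = BREAKPOINTS[guideline]
    if mic <= bp["susceptible_max"]:
        return "S"
    if mic >= bp["resistant_min"]:
        return "R"
    return "I"

# With these cutoffs, an MIC of 2 mg/L reads as susceptible under the
# CLSI-style breakpoints but resistant under the EUCAST-style ones.
clsi_call, eucast_call = interpret(2.0, "CLSI"), interpret(2.0, "EUCAST")
```

This is why an identical MIC distribution can yield a 0% resistance rate under one guideline and a non-zero rate under the other, as observed here for minocycline.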
In particular, the imipenem MICs of most ST8-MRSA-IV and CC1-MRSA-IV strains were below 0.5, in contrast to those of ST8-MRSA-I and ST5-MRSA-II, which were above 16 ( (A)).

Molecular characteristics of the strains in the major MRSA types

Drug-resistance and virulence genes differed among the major MRSA types. Although ST8-MRSA-IV and ST8-MRSA-I belonged to the same sequence type, the positivity rates for drug-resistance genes such as aadD, ant(9)-Ia, erm(A), tet(M), and bleO were significantly higher in ST8-MRSA-I than in ST8-MRSA-IV ( (B)). ST8-MRSA-I showed a trend in its drug-resistance genes similar to that of ST5-MRSA-II, except for aac(6′)-aph(2′′). However, the positivity rates for the virulence genes differed between ST8-MRSA-I and ST5-MRSA-II. The positivity rates for sec, seg, sei, sel, sem, seo, sep, seu, and tst were significantly higher in ST5-MRSA-II than in ST8-MRSA-I. In contrast, the positivity rate for splE was significantly lower in ST5-MRSA-II than in ST8-MRSA-I ( (C)). CC1-MRSA-IV has characteristic virulence gene features: it harbors virulence genes such as sea, seh, sek, and seq, which are absent in the other major MRSA types. In ST8-MRSA-IV, the positivity rates for drug-resistance and virulence genes such as aac(6′)-aph(2′′), aadD, ant(9)-Ia, erm(A), bleO, sec, sel, sep, tst, and splE were around 50% ( (B) and (C)). Phylogenetic analysis based on core-genome MLST revealed that ST8-MRSA-IV was divided into several clusters. Based on the virulence genes sec and tst and the spa type, 14 and 4 strains were classified as CA-MRSA/J and t5071-ST8-MRSA-IV, respectively. There were significant differences in gene content between CA-MRSA/J and t5071-ST8-MRSA-IV (Supplementary Table 4).
The positivity rates for aac(6′)-aph(2′′), aadD, bleO, sec, sel, and tst were significantly higher in CA-MRSA/J than in t5071-ST8-MRSA-IV (p < 0.05), and those for ant(9)-Ia, erm(A), splE, and sep were significantly lower in CA-MRSA/J than in t5071-ST8-MRSA-IV (p < 0.05). Based on antimicrobial susceptibility testing, CA-MRSA/J was more sensitive to levofloxacin, erythromycin, clindamycin, and minocycline than was t5071-ST8-MRSA-IV (p < 0.05) (Supplementary Table 4).

Relationship between the strains detected in Nagasaki and those circulating in Japan

Phylogenetic tree analysis based on MLST, cgMLST, and S. aureus accessory sequences was performed to investigate the relationship between the strains detected in this study and those from the previous nationwide surveillance. Based on this analysis, the strains were divided into three major clusters: ST8-MRSA-IV and ST8-MRSA-I, CC1-MRSA-IV, and ST5-MRSA-II (Supplementary Figure 2). In the ST8-MRSA-IV and ST8-MRSA-I cluster ( (A)), most ST8-MRSA-I strains detected in Nagasaki and in the nationwide surveillance were classified into subcluster 3, whereas ST8-MRSA-IV was divided into ten subgroups. Within the ST8-MRSA-IV group, however, all CA-MRSA/J strains detected in Nagasaki and nationwide were classified into subcluster 1, and all t5071-ST8-MRSA-IV strains detected in Nagasaki and nationwide were classified into subcluster 5. The ST5-MRSA-II strains showed the same trend as the ST8-MRSA-IV strains: those detected in Nagasaki were divided into five subclusters ( (C)). In contrast to ST8-MRSA-IV and ST5-MRSA-II, most CC1-MRSA-IV strains detected in Nagasaki and in the nationwide surveillance were classified into the same subcluster 12 ( (B)).

Twenty-seven and fifty-eight MRSA strains were isolated during 2012–2015 and 2016–2019, respectively. The percentages of SCCmec types I, II, and IV during 2012–2015 were 37.0%, 37.0%, and 25.9%, respectively.
The percentages of SCCmec types I, II, IV, and V during 2016–2019 were 15.5%, 15.5%, 65.5%, and 3.4%, respectively. The changes in SCCmec type from 2003 to 2019 are shown in Supplementary Table 1. SCCmec type II was the most frequently detected type from 2003 to 2015. However, its frequency decreased significantly from 79.2% during 2003–2007 to 15.5% during 2016–2019 (q < 0.001). In contrast, the percentage of SCCmec type IV increased dramatically from 18.2% during 2003–2007 to 65.5% during 2016–2019 (q < 0.001). SCCmec type I showed changes that differed from those of SCCmec types II and IV: its percentage increased significantly from 2.6% during 2003–2007 to 37.0% during 2012–2015 (q < 0.001), making it one of the most frequently detected SCCmec types during 2012–2015, but its prevalence then fell by more than half, to 15.5%, during 2016–2019.
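Period-to-period shifts like those reported above are typically tested per category and then corrected for multiple comparisons to yield q-values. The sketch below is an approximation under stated assumptions: the counts are reconstructed from the reported percentages and strain totals (27 and 58), and a pooled two-proportion z-test with Benjamini-Hochberg adjustment stands in for whatever exact test the authors used:

```python
from math import erfc, sqrt

def two_prop_p(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test; an assumption about the method,
    since the source does not state which test produced its q-values."""
    p1, p2, p = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return erfc(abs(z) / sqrt(2))  # two-sided normal p-value

def bh_adjust(pvals):
    """Benjamini-Hochberg step-up adjustment; returns q-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q, running_min = [0.0] * m, 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

# Counts reconstructed from the reported percentages and denominators
# (27 strains in 2012-2015, 58 in 2016-2019): 10/27 = 37.0%, 9/58 = 15.5%, etc.
counts = {"I": (10, 27, 9, 58), "II": (10, 27, 9, 58), "IV": (7, 27, 38, 58)}
pvals = [two_prop_p(*c) for c in counts.values()]
qvals = dict(zip(counts, bh_adjust(pvals)))
```

With these reconstructed counts, the rise of SCCmec type IV remains significant after adjustment (q ≈ 0.002), while the shifts in types I and II land near q ≈ 0.03, broadly consistent in magnitude with the values reported above.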
This study revealed that SCCmec type IV quickly replaced SCCmec type II as the predominant SCCmec type in patients with BSIs from the mid-2010s onward. In the United States, SCCmec type IV increased in prevalence and replaced SCCmec type II in BSIs beginning in the late 2000s. Based on the results of the two nationwide surveillance studies conducted in Japan, the prevalence of SCCmec type IV increased from 19.9% in 2011 to 77.4% in 2019, whereas that of SCCmec type II decreased from 75.6% in 2011 to 7.8% in 2019. In addition, according to a previous Japanese study conducted in the Kanto region, the prevalence of SCCmec type IV was higher than that of SCCmec type II in 2016, which is similar to the results of this study. These results indicate that the shift from SCCmec type II to IV occurred almost simultaneously throughout Japan in the mid-2010s. Although SCCmec type IV is the predominant SCCmec type in both the United States and Japan, its molecular characteristics differ between the two countries. The USA300-like clone (t008-ST8-MRSA-IV) was the most common MRSA type in the United States. This clone carries the Panton-Valentine leucocidin (PVL) and arginine catabolic mobile element (ACME) genes, which contribute to its superior virulence and adaptability compared with ST5-MRSA-II and have made it the predominant MRSA clone in the United States. However, the percentage of the USA300-like clone was only 3.3% in Japan in 2019, which is similar to the results of this study. In both the Japanese nationwide study and our study, ST8-MRSA-IV and CC1-MRSA-IV without PVL and ACME replaced ST5-MRSA-II. These clones have fewer virulence genes and better antimicrobial susceptibility than ST5-MRSA-II.
However, a previous study reported that SCCmec type IV strains isolated in Japan have a higher plasma-biofilm formation ability than SCCmec type II strains. In addition, the positivity rate for splE, a gene associated with the interaction with host proteins during infection, was significantly higher in ST8-MRSA-IV and CC1-MRSA-IV than in ST5-MRSA-II in our study. Thus, the increase in SCCmec type IV in Japan is likely driven by mechanisms different from those underlying the spread of the USA300-like clone. The emergence and spread of SCCmec type IV clones are not confined to Japan and the United States. In other parts of Asia, distinct SCCmec type IV MRSA clones have also been identified. For instance, in South Korea, ST72-MRSA-IV had replaced the previously dominant clones ST5-MRSA-II and ST239-MRSA-III by 2018. Similarly, in southern China, ST59-MRSA-IV was the most common MRSA clone in pediatric patients. Additionally, in the Philippines, a country in Southeast Asia, ST30 has been reported as the dominant MRSA strain. These findings suggest that the evolution and dissemination of SCCmec type IV MRSA clones vary significantly across countries, driven by local epidemiological and ecological factors. Meanwhile, recent reports from Taiwan indicate an increasing prevalence of USA300-like clones, highlighting the need for close monitoring of future trends. We also investigated the changes in patient characteristics. Although patient backgrounds did not change between 2003 and 2019, SOFA scores and in-hospital mortality improved significantly. In addition, the source of MRSA infection changed. The percentage of patients with intravascular device-related BSI was much higher during 2016–2019 than during 2003–2007, whereas that of patients with respiratory tract infections was much lower during 2016–2019 than during 2003–2007.
The increased prevalence of intravascular device-related BSIs may be partially explained by the higher plasma-biofilm-forming ability of SCCmec type IV strains reported in previous studies. Biofilm formation enhances bacterial adhesion to medical devices, which could facilitate colonization and subsequent bloodstream infections. While our study did not directly assess biofilm formation, this characteristic likely played a role in the observed trend. We also compared the patient characteristics among the major MRSA types. The SOFA score and in-hospital mortality were higher in patients with ST5-MRSA-II than in those with the ST8-MRSA-IV, ST8-MRSA-I, or CC1-MRSA-IV types. ST5-MRSA-II exhibited a high prevalence of several toxin genes, such as sec, seg, sei, sem, sen, seo, seu, and tst. In addition, the percentage of patients with respiratory tract infections was higher for ST5-MRSA-II than for ST8-MRSA-IV, ST8-MRSA-I, and CC1-MRSA-IV. In S. aureus BSIs, pneumonia has been reported as an independent factor associated with death. These factors likely contributed to the higher SOFA score and in-hospital mortality in patients with ST5-MRSA-II compared with the other MRSA types. In contrast, the SOFA score and in-hospital mortality in patients with CC1-MRSA-IV were lower than in those with the other MRSA types. Because the prevalence of ST5-MRSA-II decreased while those of ST8-MRSA-IV and CC1-MRSA-IV increased from 2003 to 2019, and there was no change in patient background, the changes in MRSA types may have reduced the severity and in-hospital mortality. CA-MRSA/J, t5071-ST8-MRSA-IV, and CC1-MRSA-IV, the major MRSA types in Japan in 2019, were also detected in this study and classified into distinct clusters in the phylogenetic analysis. Furthermore, the strains detected in this study were classified into the same subclusters as those detected in the Japanese nationwide surveillance.
In addition, the drug-susceptibility trends in this study were similar to those in the national surveillance in most cases. The following differences among the three major types were observed in both studies: CA-MRSA/J was sensitive to levofloxacin; CA-MRSA/J and CC1-MRSA-IV were sensitive to clindamycin, while t5071-ST8-MRSA-IV was resistant; and for minocycline, all types were sensitive according to the CLSI criteria, whereas t5071-ST8-MRSA-IV had a very high rate of resistance according to the EUCAST criteria. The positivity rates for drug-resistance genes showed the same trends in both studies. Almost all CC1-MRSA-IV and t5071-ST8-MRSA-IV strains harbored ant(9)-Ia and erm(A). In contrast, the positivity rates for aac(6′)-aph(2′′) and aadD were much higher in CA-MRSA/J than in the other types. The positivity rates for virulence genes showed the same trend in both studies. The virulence genes sea, seh, sek, and seq were harbored by CC1-MRSA-IV but not by the other two types. Similarly, CA-MRSA/J, but not the other two types, harbored sec, sel, and tst. In contrast, CA-MRSA/J did not harbor splE. These results suggest that CA-MRSA/J, t5071-ST8-MRSA-IV, and CC1-MRSA-IV have spread rapidly across the nation with the same molecular background and characteristics. However, there were several differences in the percentages of CA-MRSA/J, t5071-ST8-MRSA-IV, and CC1-MRSA-IV between this study and the Japanese nationwide surveillance. In this study, although CA-MRSA/J was the predominant strain of ST8-MRSA-IV, t5071-ST8-MRSA-IV was not frequently detected. CA-MRSA/J harbors a superantigenic toxin-encoding S. aureus pathogenicity island (SaPI), which includes the sec, tst, and sel genes. This clone has been widely reported in Japan since the 2000s, primarily linked to skin infections. In contrast, t5071-ST8-MRSA-IV was first identified as a spreading strain in Japan in the previous nationwide surveillance and was detected only after 2016 in this study.
A similar phenomenon was also observed for CC1-MRSA-IV, which was detected only after 2016. Although there were regional differences in the ratios of CC1-MRSA-IV and ST8-MRSA-IV in Japan, the percentage of ST8-MRSA-IV was much higher than that of CC1-MRSA-IV in western Japan, including the Kyushu region, to which Nagasaki belongs. However, in a single-center study conducted in the Kyushu region during 2018–2019, the percentage of CC1-MRSA-IV was 14.2% in blood cultures, whereas it was 59.0% and 77.3% in sputum and in skin and soft tissues, respectively. In addition, a previous study conducted in Hokkaido reported a rapidly increasing percentage of CC1-MRSA-IV in BSIs in 2019, and a study on MRSA bloodstream infections in the Kyushu region reported a notable increase in CC1-MRSA-IV after 2016, with a marked rise especially after 2019. Both t5071-ST8-MRSA-IV and CC1-MRSA-IV had a high positivity rate for the splE gene, distinguishing them from CA-MRSA/J, which lacked splE, similar to ST5-MRSA-II. Since both t5071-ST8-MRSA-IV and CC1-MRSA-IV were detected only after 2016 in this study, and the strains within each clone were highly similar in the minimum spanning tree (MST) analysis, it is possible that we observed only the early stages of these clones’ circulation in our region. The unique presence of the splE gene among t5071-ST8-MRSA-IV and CC1-MRSA-IV isolates may have contributed to their rising prevalence after 2016. A previous study suggests that splE enhances interactions with host proteins, potentially improving colonization and infection efficiency. While our study did not directly investigate the functional role of splE, its presence likely provided a selective advantage, facilitating the dissemination of these clones. The present study also revealed that a different epidemic strain was prevalent in our hospital compared with those identified in the nationwide surveillance. In our hospital, the prevalence of SCCmec type I increased from 2003 to 2015, and it became the predominant MRSA type during 2012–2015.
Unlike ST8-MRSA-IV, ST8-MRSA-I showed a drug-resistance pattern similar to that of ST5-MRSA-II. The differences from ST5-MRSA-II were that ST8-MRSA-I was more sensitive to clindamycin and exhibited higher positivity rates for aac(6′)-aph(2′′) and aadD. In contrast, their virulence genes were completely different, with ST8-MRSA-I carrying markedly fewer virulence factors than ST5-MRSA-II. With respect to virulence genes, ST8-MRSA-I showed a trend similar to that of t5071-ST8-MRSA-IV; however, sed, sej, and ser were detected only in ST8-MRSA-I. In the nationwide surveillance study, four ST8-MRSA-I strains were detected in the Kyushu region. In a previous single-center study in the Kyushu region, ST8-MRSA-I was the second most frequently detected type during 2018–2019. Several studies in the Kyushu region detected SCCmec type I strains at appreciable rates: 8.9% during 2005–2011, 14.5% during 2014–2015, and 9.3% during 2019–2020. These results indicate that ST8-MRSA-I spread independently in the Kyushu region. In this study, the prevalence of ST8-MRSA-I decreased from 37.0% during 2012–2015 to 15.5% during 2016–2019. A similar trend was observed in a study on MRSA BSI in the Kyushu region, where the proportion of SCCmec type I decreased from 32% in 2013–2015 to 10% in 2016–2018 and further dropped to 6% in 2019–2021. As previously mentioned, ST8-MRSA-I exhibited high susceptibility to clindamycin; however, the antimicrobial use density (AUD) for clindamycin at our hospital remained relatively stable, at 0.32 in 2012 and 0.29 in 2019, indicating no significant increase. While ST8-MRSA-I frequently harbors the resistance genes aac(6′)-aph(2′′) and aadD, these genes are rarely found in CC1-MRSA-IV. At our hospital, the AUD for aminoglycosides decreased significantly from 0.45 in 2012 to 0.14 in 2019, suggesting that reduced aminoglycoside use may have contributed to the replacement of the endemic ST8-MRSA-I by SCCmec type IV strains, such as CC1-MRSA-IV, in the Kyushu region.
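The AUD values quoted above are aggregate consumption metrics. The authors do not state their normalization, so as an assumption the sketch below uses a common convention, defined daily doses (DDDs) per 100 patient-days, with hypothetical consumption figures:

```python
def aud(total_ddd, patient_days, per=100):
    """Antimicrobial use density: defined daily doses per `per` patient-days."""
    return total_ddd * per / patient_days

# Hypothetical figures: 540 DDDs of an aminoglycoside dispensed over 120,000 patient-days
print(round(aud(540, 120_000), 2))  # prints 0.45
```

Tracking this quantity year over year is what supports statements like the drop from 0.45 to 0.14 for aminoglycosides; only the total DDDs per drug class and the patient-days for each year are needed.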
This study has several limitations. First, because this was a single-center study, it is unclear whether the differences in patient characteristics for each MRSA type can be generalized. Second, to reduce the likelihood of contamination, we included only cases that were positive in at least two sets of blood cultures. This inclusion criterion is the same as that used in the nationwide surveillance but different from that of our previous studies. Under the previous criteria, in-hospital mortality during 2012–2015 would have been lower than in the current study (24.3% vs. 39.8%), while it was similar for 2016–2019 (17.0% vs. 15.5%). Thus, we believe the observed mortality reduction is not due to the change in the inclusion criteria. In contrast, the previous inclusion criteria would have yielded 70 strains for 2012–2015 and 94 strains for 2016–2019, indicating a significant reduction in the number of strains analyzed in 2012–2015 due to the criteria change. However, since the decision to collect one or two sets of blood cultures was left to the clinicians’ discretion, we believe this is unlikely to have influenced the distribution of MRSA clones. Third, we aimed to investigate the changes at our institution up to the year of the nationwide surveillance conducted in Japan, which is why we limited our analysis to strains collected until 2019. Further changes in MRSA clones may have occurred since 2020, influenced by the COVID-19 pandemic and other factors; therefore, ongoing surveillance and further investigation will be necessary to monitor these potential shifts. Fourth, there were several strains for which the spa type was not determined or that were excluded from the phylogenetic analysis: 22 of 85 strains (25.9%) and 10 of 75 strains (13.3%), respectively. This limitation also holds for the nationwide surveillance.
Fifth, neither this study nor the previous nationwide surveillance included an evolutionary phylogenetic analysis to estimate when the major MRSA clones emerged. Future studies incorporating more extensive genomic data from across Japan are needed to address this limitation. Lastly, while we hypothesize that the higher plasma-biofilm-forming ability of SCCmec type IV strains and the presence of the splE gene in t5071-ST8-MRSA-IV and CC1-MRSA-IV isolates may have contributed to the observed trends, these factors were not directly assessed in this study. Further experimental studies are required to validate these associations. In conclusion, this study demonstrated that the major MRSA types in BSIs changed over time, and these changes were associated with reduced disease severity and in-hospital mortality. In addition, the changes in the major MRSA types were directly influenced by changes in the circulating strains nationally and regionally.
A Step Forward in Understanding the Expression of Classical Aquaporins in the Male Reproductive Tract: Study Findings in Cattle (Bos taurus)

It is widely known that effective transport of water and solutes in the individual segments of the male reproductive tract is a prerequisite for maintaining a unique microenvironment for the multi-step process of spermatogenesis and the subsequent concentration, maturation, and storage of spermatozoa. Fluid movement already occurs within the testes, where Sertoli cells residing on the basal membrane of the seminiferous tubules regulate the composition of the tubular fluid. This fluid serves as the environment for spermatozoa development and facilitates their transport from the testis through the rete testis, efferent ducts, and epididymis into the vas deferens. A portion of the seminiferous tubular fluid also originates from differentiating germ cells. In fact, approximately 70% of the cell volume is osmotically eliminated from the cytoplasm during the differentiation of round spermatids into elongated spermatids. The composition of the fluid formed in the seminiferous tubules undergoes gradual modifications through concurrent processes of secretion and resorption by epithelial cells in successive segments of the male reproductive tract. Undoubtedly, efficient fluid movement is facilitated by water channels located in the male reproductive organs, known as aquaporins (AQPs). AQPs belong to a family of small, hydrophobic, transmembrane proteins that facilitate the transport of water and a wide range of other substances, including glycerol, urea, carbon dioxide, ammonia, and hydrogen peroxide. In mammals, 13 members of the AQP family (AQP0–AQP12), located in many different cell types throughout the body, have been discovered thus far.
The orthodox aquaporins, also referred to as classical aquaporins, constitute the largest subfamily of AQPs, comprising six members: AQP0, AQP1, AQP2, AQP4, AQP5, and AQP6. These proteins are considered to be primarily selective for water, although they may also facilitate the transport of other small molecules. The exception is AQP2, which is selectively permeable solely to water. The studies carried out to date, mainly in laboratory animals, have shown that at least five classical aquaporins, i.e., AQP0, AQP1, AQP2, AQP4, and AQP5, are present in the male reproductive system. AQP0 has been identified in rat Leydig and Sertoli cells. AQP1 has been localized in the rete testis, efferent ducts, and epididymis in mice, while AQP2 has been observed in the vas deferens in mice and rats. The presence of AQP4 has been found in rat Sertoli cells, while AQP5 has been observed in the corpus and cauda regions of the rat epididymis. Our previous study demonstrated that the expression and distribution patterns of aquaglyceroporins (AQP3, AQP7, and AQP9) within the reproductive organs of bulls varied in a tissue-specific and age-dependent manner. Based on those results, it was concluded, among other things, that AQP3 and AQP7 might play a significant role in the migration and proliferation of gonocytes, while all three aquaglyceroporins might be involved in the formation of the microenvironment in the lumen of the epididymis. The present work is a continuation of our previous research in this area. So far, among the classical AQPs in cattle, only AQP1 in the epididymis and vas deferens of mature buffalo bulls has been analyzed. In the context of the increase in male fertility disorders observed in recent years, the analysis of the localization and expression of individual AQPs within the reproductive organs not only arouses great interest but also creates new and broad possibilities for predicting reproductive potential.
Therefore, the aim of the present study was to determine the localization of all classical aquaporins in the bovine male reproductive tract and to analyze changes in their expression with age. Achieving this objective will enable testing of the research hypothesis that classical aquaporins play a crucial role in maintaining fluid homeostasis within the male reproductive tract in cattle.

2.1. Immunolocalization of Classical Aquaporins in Bovine Male Reproductive Tract

In the bovine male reproductive system, five of the six classical AQPs were observed, namely AQP0, AQP1, AQP4, AQP5, and AQP6. The presence of AQP2 in the reproductive organs was excluded in all animals studied. The semi-quantitative assessment of the total labeling intensity of the identified AQPs was carried out using the adopted scale. Within the testis, AQP0 and AQP1 were observed in the interstitial tissue across all animal groups. Weak expression of AQP0 was detected in the cytoplasm and cell membrane of Leydig cells ( A–C). It should be noted that this immunostaining pattern was observed in only half of the animals in each experimental group. AQP1 was visible in the endothelial cells of blood vessels ( G–I). AQP0 and AQP1 were not detected in the germ cells of the seminiferous tubules or in the Sertoli cells of calves, young bulls, and reproductive bulls. In sexually mature animals, weak to moderate immunoexpression of AQP1 was observed in the plasma membrane of the peritubular myoid cells surrounding the seminiferous tubules ( I). In the examined animals, immunoreactivity of AQP0, AQP1, and AQP6 was detected in the rete testis and efferent ducts. AQP0 and AQP6 were found in the cells lining the rete testis and efferent ducts ( D–F,K–M). These AQPs were predominantly localized in the cytoplasm and less frequently in the cell membrane. It should be emphasized that AQP0 was present exclusively in individuals in which its expression was also observed in Leydig cells.
AQP1 was visible in blood vessels surrounding the rete testis and efferent ducts ( J). AQP6 expression in reproductively mature bulls in both of these structures was higher compared to sexually immature animals. In the epididymis, AQP0, AQP1, AQP4, AQP5 and AQP6 were observed. AQP0 was present in the caput and corpus epididymis. In sexually immature animals, weak immunoexpression of this protein was recorded in the cytoplasm and plasma membrane (especially in its apical part) of the epithelial cells ( A,B). In reproductive bulls, AQP0 was observed in the cytoplasm and plasma membrane of basal and principal cells ( C,D). In both the caput and corpus epididymis, the abundance of AQP0 was greater in the basal cells compared to the principal cells. In all animals studied, AQP0 was also visible in blood vessels ( A–D). No AQP0 staining was detected in the cauda epididymis. In both sexually immature and mature animals, AQP1 was present in the initial segment of the caput epididymis ( E–G). This protein was observed at the apical surface of the epithelial cells. No AQP1 immunoreactivity was detected in the epithelium of the corpus and cauda epididymis. Along the entire epididymal duct, AQP1 was visible in the endothelium of blood vessels ( E,H,I). In calves and young bulls, weak to moderate expression of AQP4, AQP5, and AQP6 was predominantly observed in the cytoplasm, occasionally in the apical cell membrane of epithelial cells from the caput to the cauda of the epididymis ( J,N,O,R–T). In sexually mature individuals, AQP4, AQP5, and AQP6 were present in the cytoplasm and plasma membrane of principal and basal cells ( K–M,P,Q,U). In some cross-sections, AQP4, AQP5, and AQP6 were most abundant in the apical part of the epididymal epithelium ( L,P,Q,U). Immunoperoxidase labeling of AQP4, AQP5 and AQP6 was also detected in the stereocilia of principal cells ( K–M,P,Q,U). 
In reproductive bulls, the expression of AQP4 and AQP6 in the caput, corpus, and cauda epididymis was stronger than in calves and young bulls in the corresponding sections. AQP5 immunostaining decreased in the caput epididymis and increased in the corpus and cauda epididymis with the growth and development of the animals. No classical AQP was found in the epididymal sperm. In the vas deferens, AQP1, AQP4, AQP5 and AQP6 expression was detected in all animals across the tested age groups. AQP1 was present in the blood vessels located around the vas deferens ( A,B), while AQP4, AQP5 and AQP6 were visible in the cytoplasm and plasma membrane of both principal and basal cells ( C–K). In young and reproductive bulls, AQP5 was also observed in the stereocilia of principal cells ( G,H). In the vas deferens, the immunoexpression of AQP4, AQP5 and AQP6 increased with the growth and development of the animals.

2.2. Immunoblotting of Classical Aquaporins in Bovine Male Reproductive System

AQP0, AQP1, AQP2, and AQP5 were identified in the collected research material using the Western blot method . Control samples for individual AQPs confirmed the presence of AQP0 in the bovine lens, AQP1 in the renal cortex, AQP2 in the renal medulla, and AQP5 in the parotid salivary glands and lungs. The presence of AQP0 and AQP5 was confirmed in the male reproductive system. AQP0 was detected as a single band at 28 kDa in the testis, as well as in the caput and corpus epididymis ( A). No AQP0-specific signal was present in the cauda epididymis and vas deferens. AQP1 and AQP2 were only detected in a protein extract isolated from the bovine kidney ( B,C). In all animal age groups, AQP5 was detected as a single distinct band of 30 kDa in three regions of the epididymis (caput, corpus, and cauda) and in the vas deferens ( D). No AQP5 signal was observed in the testis. It was not possible to analyze the expression of bovine AQP4 and AQP6 using Western blotting with commercially available antibodies.
3. Discussion

In the present study, the cell- and tissue-specific distribution and expression of classical aquaporins were examined in various segments of the bovine male reproductive tract, along with their changes with growth and development of the animals. The testes are the main reproductive organs in males, responsible for producing spermatozoa and secreting sex hormones, primarily testosterone.
Testicular tissue can be divided into two separate compartments, namely interstitial space and seminiferous tubules. Of all the aquaporins analyzed in the interstitial cells of the animals under study, only AQP0 was identified. This protein was detected in Leydig cells, but this distribution pattern was observed only in half of the animals in each experimental group. Published data indicate that AQP0 in Leydig cells has also been identified in other animal species, including horses and rats . Additionally, AQP2 and AQP5 have also been observed in these steroidogenic cells in horses, while AQP1 has been identified in mice . According to many authors, classical aquaporins located in Leydig cells contribute to maintaining water homeostasis between the extracellular and intracellular compartments . Despite standardized study groups and identical conditions of IHC analysis, AQP0 was present only in some animals. Until now, both in our research and in studies by other authors, there have been no observations indicating that a specific aquaporin can be present or absent in certain individuals under physiological conditions in a specific organ. All tested animals were healthy and in good condition. We reanalyzed previously published data regarding AQP7 and AQP9 in Leydig cells in all animals studied , and no relationship was found between the expression of these aquaporins and the occurrence of AQP0. The presence or absence of AQP0 in bovine Leydig cells raises important questions about the potential impact of this protein on cell function. This issue undoubtedly requires further and more detailed studies. AQP1 was also observed in the vascular endothelial cells within the testicular interstitium of all the examined individuals. Strong expression of this protein in blood vessels has been reported in various tissues and organs, such as the kidney, lung, skin, secretory glands, and skeletal muscle . 
However, AQP1 was detected for the first time in the vasculature of the testes. This distribution suggests a role of this aquaporin in supporting water movement between the blood stream and the interstitial space, related to the functional specificity of the testes. In reproductive bulls, AQP1 was also observed in the cell membrane of peritubular myoid cells, which are the main cellular component of the walls of seminiferous tubules. In 1958, these cells were first discovered in rat testes and described as smooth muscle-like cells . Currently, it is known that myoid cells are present in all mammals, although their organization differs between species . In rats and mice, myoid cells are arranged in a single layer, while in humans, horses, and cattle, they exist in multiple layers . To date, several functions of these cells have been established. One key role is their contractile activity, which is involved in the transport of non-motile spermatozoa and testicular fluid through the seminiferous tubules . Myoid cells may also take part in the regulation of spermatogenesis by producing various hormones, cytokines and growth factors that modulate the function of Sertoli cells . In addition, the peritubular myoid cell layer is an important contributor to the formation of the blood–testis barrier and structural integrity of seminiferous tubules . Considering these facts, it is likely that AQP1 plays a significant role during the contraction of myoid cells, as it may facilitate water exchange between these cells and the interstitium to ensure volume changes during their contraction. Therefore, AQP1 may support the aforementioned expulsion of sperm cells through seminiferous tubules. It is worth noting that the testes of sexually immature animals do not yet produce spermatozoa, which may explain why the presence of this protein in peritubular myoid cells was recorded exclusively in reproductive bulls. 
In the animal species studied to date, among the classical aquaporins within the epithelium lining the seminiferous tubules, AQP0 and AQP4 have been observed in rat Sertoli cells, and AQP1 in bat spermatids . Surprisingly, none of the aquaporins analyzed were detected in cattle in this structure. However, previous studies have indicated that AQP3 and AQP7 are present in germ cells, while AQP7 is present in Sertoli cells in these animals . The absence of classical aquaporins in the bovine germinal epithelium is possibly compensated by the abundance of the aforementioned aquaglyceroporins. It should be noted that the presence of AQP8 and AQP11 in these cells cannot be excluded, as these aquaporins also facilitate membrane transport of water molecules. After spermatozoa are released into the tubular lumen, the fluid created by Sertoli cells assists in the transport of the immature spermatozoa to the rete testis . The rete testis is a network of small tubules located in the mediastinum of the testis, connecting the seminiferous tubules to the efferent ducts. The efferent ducts are responsible for reabsorbing up to 90% of the testicular luminal fluid, thereby increasing the concentration of spermatozoa and enabling interactions of their surface with the secretory products of the epididymal epithelial cells essential for sperm maturation . In previous studies involving both of these structures, only AQP1 has been identified among the classical aquaporins. This protein was observed in the rete testis in dogs, rats and mice . It is noteworthy that Aqp1 -knockout mice showed only slight alterations in the rete testis as a result of obstruction of fluid reabsorption in the tubule lumen, and these animals were still fertile . In efferent ducts, AQP1 has been reported in rats, mice, dogs, sheep, bats, and buffaloes . In the test animals, AQP0 and AQP6 were observed within the epithelium of the rete testis and efferent ducts. 
The abundance of AQP0 in both of these structures was relatively low and was only observed in individuals where its presence was detected in Leydig cells. In the present study, it was found that the expression of AQP6 was higher in reproductive bulls. However, there is a lack of data in the available literature regarding the localization of this protein in the male reproductive tract in other animal species, which significantly complicates the interpretation of the results. However, it is known that unlike other aquaporins, AQP6 exhibits low water permeability and facilitates the transport of urea, glycerol, and anionic ions, particularly nitrate . Given this information, it can be assumed that this aquaporin is involved primarily in the rapid movement of small solutes between the lumen and epithelial cells lining the rete testis and efferent ducts. Moreover, AQP1 in cattle has been detected in the endothelial cells of blood vessels in both of these structures. According to Badran and Hermo , the expression of this protein on vascular channels may mediate the removal of water from the intertubular space of these sections of the reproductive system. It is well established that testicular spermatozoa are immature and undergo maturational changes and acquire motility during their passage through the long highly convoluted tubule known as the epididymis . This organ can be divided into three regions—caput, corpus and cauda—each with unique characteristics and function . While previous studies have extensively examined classical aquaporins in the mature epididymis of sexually mature males, our study uniquely investigated their location and expression dynamics during animal growth and development. Of the classical aquaporins observed within the epididymis, AQP1 is currently the most comprehensively characterized. However, Yeste et al. aptly pointed out that the distribution of this protein differs between species. 
In the epididymis of buffaloes, AQP1 has been observed only in the blood vessels . In sheep, this aquaporin was found in the apical region of epithelial cells in the initial epididymal segment and in the microvilli of principal cells of the caput . In mice, AQP1 has been observed on the membrane of smooth muscle cells surrounding the epididymal duct . Previous studies have also demonstrated that in addition to AQP1, other AQPs, such as AQP0, AQP2, AQP4, and AQP5, are present in the epididymis of various animal species . In the present experiment, five classical aquaporins have been identified in the bovine epididymis. However, it should be noted that not all of them were present along the entire epididymal duct . AQP1 was consistently found on the apical surface of epithelial cells in the initial segment of the caput epididymis and in the endothelium of blood vessels throughout the entire epididymis in all animals studied. The present results confirm earlier reports suggesting that AQP1 is primarily involved in fluid reabsorption in the proximal region of the epididymis . In sexually immature animals, AQP0, AQP4, AQP5, and AQP6 were predominantly observed in the cytoplasm of epithelial cells. Specifically, AQP0 was detected in the caput and corpus epididymis, while AQP4, AQP5, and AQP6 were found in all three regions of the epididymis. The observed increase in their expression with the growth and development of animals indicates a progressive involvement of AQP0, AQP4, AQP5, and AQP6 in the formation of the epididymal microenvironment. In the current study, AQP0 was mainly observed in basal cells of reproductive bulls, and its expression appeared to decrease gradually along the successive segments of the epididymis, being most abundant in the caput, slightly less abundant in the corpus, and completely absent in the cauda. 
This protein was also reported in the blood vessels of the caput and corpus epididymis, which was consistent with previous observations in horses . In sexually mature animals, AQP4, AQP5, and AQP6 were found in both principal and basal cells throughout the entire epididymis, indicating that these aquaporins participate in the reabsorption of a large amount of fluid within the lumen of this organ. These findings underscore the presence and coordinated function of multiple classical AQPs in various types and regions of the bovine epididymis, and their co-localization suggests their collective role in the transport of water and/or other small solutes to ensure optimal conditions for sperm concentration, maturation, protection, and storage. The vas deferens, a part of the excurrent ducts, also plays a crucial role in transepithelial water reabsorption in the reproductive tract, maintaining the luminal environment essential for the next steps of spermatozoa maturation and subsequent storage . Our findings revealed the absence of AQP0 and AQP2 in cattle in this region. However, AQP1 was found in the endothelial cells of blood vessels, while AQP4, AQP5, and AQP6 were detected in the epithelial cells in all examined animals. AQP1 expression in vessels surrounding the vas deferens was also recorded in buffaloes and dogs . Among classical aquaporins identified in various animal species to date, AQP0 has been documented in the epithelium of the vas deferens in horses, AQP1 in mice and rats, and AQP2 in rats . The presence of AQP1 in the vas deferens endothelium suggests its involvement in regulating fluid movement between vascular and interstitial spaces, similar to its role in other parts of the male reproductive system. However, the detection of three classical aquaporins in the principal and basal cells of the bovine vas deferens indicates significant movement of water and small solutes within this organ. 
Moreover, the abundance of AQP5 in the stereocilia of principal cells in young and reproductive bulls suggests an additional role of this protein in creating a unique microenvironment in the lumen of the vas deferens. The location of and expression changes in individual AQPs in the male reproductive system are still not sufficiently known. Hence, a number of possibilities and various factors that may modify them should be taken into account. Among others, the increasing environmental pollution deserves attention, especially since male reproductive organs are particularly exposed to it. According to the latest research, environmental pollution has an impact on human semen and germ cells. It causes an increase in the concentration of immature cells, a decrease in the protamine/histone ratio, a reduced DNA-binding ability of sperm nuclear basic proteins, and a change in the copper/zinc ratio in sperm . Marinaro et al. analyzed the effects of exposure to single heavy metals (copper, nickel and cadmium) and their mixture on the reproductive system of the marine invertebrate M. galloprovincialis . These authors found conformational alterations of protamine-like proteins that produced changes in their binding to DNA, which in turn resulted in atypical chromatin compaction of spermatozoa. In addition, a structural change in the male gonad was observed. There are no data in the available literature regarding the influence of various pollutants on AQPs; however, this influence cannot be ruled out. Further studies are needed to evaluate whether bioaccumulation of pollutants in the tissues of the male reproductive system affects the expression of AQPs. Based on this and our previous research , a varied map of the location of AQPs was described in the testis. In all groups of animals, AQP0, AQP7 and AQP9 were found in the Leydig cells. Within the seminiferous tubules of immature animals, AQP3 and AQP7 were observed in the gonocytes.
In the testicular tissue of the reproductive bulls, AQP1 was detected in the peritubular myoid cells, AQP3 in the spermatogonia and spermatocytes, and AQP7 in all germ cells and Sertoli cells. In the examined animals, AQP0, AQP3, AQP6 and AQP7 were found in the epithelial cells lining the rete testis and efferent ducts. In all studied individuals, AQP1 was visible in the apical surface of epithelial cells in the initial segment of the caput epididymis. In reproductive bulls, AQP0 was found in the caput and corpus epididymis, while AQP3, AQP4, AQP5, AQP6, AQP7 and AQP9 were observed along the entire epididymis. In sexually mature individuals, AQP0 and AQP3 were mainly visible in the basal cells. Immunoexpression of AQP4, AQP5, AQP6 and AQP7 was detected in both principal and basal cells. In turn, AQP9 immunostaining was noted in the stereocilia of the principal cells. In the bovine vas deferens, AQP3, AQP4, AQP5, AQP6, AQP7 and AQP9 were detected. These proteins share a similar labeling pattern in this organ in all animals tested. Mentioned AQPs were identified in the principal and basal cells, wherein immunostaining of AQP5 and AQP9 was also observed in the stereocilia. Within the male reproductive system, AQP1 was also found in the endothelial cells of the blood vessels. Similarly to our previous study, certain methodological challenges were encountered during the experiments . They pertained to the analysis of AQPs using Western blotting. With respect to AQP0 and AQP5, despite the clear detection of these proteins in controls, their expression was very low under the same conditions, even when the total protein quantity in samples from various segments of the male reproductive organs was doubled. Hence, it was not possible to reliably and credibly determine alterations in their expression. 
A challenge for modern science in the context of identifying and analyzing protein expression, including AQPs, is the search for alternative research methods that would enable their additional verification.

4. Materials and Methods

4.1. Animals and Tissue Collection

The experiments were conducted on the male reproductive organs obtained from 31 Polish, Holstein-Friesian, Black-and-White bulls ( Bos taurus ). Tissue samples were collected from three age groups of animals: (I) calves aged from 5 to 6 weeks ( n = 10), (II) young bulls aged from 15 to 25 weeks ( n = 10) and (III) reproductive bulls aged from 2 to 6 years ( n = 11). Detailed information regarding the studied animals and their origin was described in a previous work . The bulls from groups II and III were slaughtered according to standard slaughterhouse procedures. After slaughter, the male gonads were immediately removed and dissected. Representative fragments of each testis, epididymis (caput, corpus and cauda regions) and vas deferens were collected and cut into small uniform pieces. Each fragment of the right reproductive organs was fixed in 10% buffered formalin, processed, embedded in paraffin blocks, cut into 2 μm-thick sections and subjected to immunohistochemical staining. Morphological and morphometric analyses of the studied material were conducted in a previous study . Each representative region of the left reproductive organs was frozen in liquid nitrogen, stored at −80 °C and subsequently used for Western blot analysis.

4.2. Antibodies

All antibodies used in the present study and their dilutions for immunohistochemistry and Western blot are shown in . The specificity and antigenicity of commercially available antibodies against bovine AQP4, AQP5 and AQP6 have been proven in our previous work .
Homology between the peptide sequences used for developing the antibodies and the amino acid sequences of bovine AQP0, AQP1 and AQP2 was determined using the protein BLAST database ( https://blast.ncbi.nlm.nih.gov/Blast.cgi , accessed on 10 January 2024). Depending on the type of primary antibodies, labeling of the antigen–antibody complexes was visualized with the use of secondary polyclonal goat anti-rabbit or anti-mouse horseradish peroxidase-conjugated antibodies. Traditional negative controls (i.e., omission of primary antibodies) were used in immunohistochemistry and Western blot.

4.3. Immunohistochemistry (IHC)

Immunohistochemistry was performed as previously described in detail by Michałek et al. . After blocking endogenous peroxidase activity, sections were subjected to heat-induced antigen retrieval in citrate buffer at pH 6.0 (for AQP6) or in Tris-EGTA buffer at pH 9.0 (for AQP0, AQP1, AQP2, AQP4 and AQP5). Sections were incubated overnight at 4 °C with the validated primary antibodies against bovine AQP0, AQP1, AQP2, AQP4, AQP5 and AQP6. After washing, samples were incubated for 1 h at room temperature with corresponding secondary antibodies. Immune reactions were visualized using 3,3′-diaminobenzidine (DAB) chromogen solution (Dako, Glostrup, Denmark, cat. no. K3468). All sections were analyzed using an Olympus BX43 light microscope (Olympus, Hamburg, Germany). Specificity of immunostaining was confirmed by following the above procedures, except that the primary antibodies were replaced with the same amount of IgG from bovine serum (Sigma Aldrich, Darmstadt, Germany, cat. no. I5506; negative controls). In order to confirm the antigenicity of anti-AQP0, -AQP1 and -AQP2, positive controls with the use of bovine lens (for AQP0) and kidney (for AQP1 and AQP2) were additionally performed.
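The antibody validation above relied on the NCBI protein BLAST web service to check homology between the immunizing peptides and bovine AQP sequences. As a rough, stdlib-only illustration of the underlying idea (not the tool used in the study), percent identity of a short peptide against a target sequence over an ungapped sliding alignment can be computed as follows; all sequences shown are hypothetical:

```python
def percent_identity(peptide, target):
    """Best ungapped match of `peptide` within `target`, as percent
    identical residues. Illustrative only -- real homology searches
    (e.g., NCBI BLAST) use scored, gapped local alignments."""
    if not peptide or len(peptide) > len(target):
        return 0.0
    best = 0
    for start in range(len(target) - len(peptide) + 1):
        window = target[start:start + len(peptide)]
        best = max(best, sum(p == t for p, t in zip(peptide, window)))
    return 100.0 * best / len(peptide)

# Hypothetical immunizing peptide vs. a hypothetical target fragment.
print(percent_identity("CDYDWHPK", "MLCDYDWHPKAV"))  # exact match -> 100.0
```

A high percent identity between the immunizing peptide and the bovine ortholog is what makes cross-species reactivity of a commercial antibody plausible, which is why such a check precedes the IHC and Western blot work.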
The immunohistochemical reactions were assessed by semiquantitative scoring based on labeling intensities as follows: no expression = “−”; weak = “+”; moderate = “++”; strong = “+++”.

4.4. Western Blot (WB)

The procedure was performed as described previously by Michałek and Grabowska . Protein samples were separated on 12% Criterion TGX Stain-Free gels (Bio-Rad, Hercules, CA, USA; cat. no. 5678045) and transferred to PVDF membranes. After blocking, membranes were incubated overnight at 4 °C with primary antibodies against AQP0, AQP1, AQP2, AQP4, AQP5 and AQP6. The following day, membranes were washed and incubated at room temperature for 1 h with the corresponding secondary antibodies. Protein bands were visualized using an enhanced chemiluminescence system (Clarity™ Western ECL Substrate, Bio-Rad, Hercules, CA, USA, cat. no. 170-5061), and blot images were acquired in a ChemiDoc MP imaging system (Bio-Rad, Hercules, CA, USA). For positive controls, bovine protein extracts were used, i.e., lens (for AQP0), renal cortex (for AQP1 and AQP4), renal medulla (for AQP2 and AQP6) and parotid salivary glands and lungs (for AQP5). The obtained images were recorded in digital form and modified (autoscaling was applied and a representative band was cut out) using CorelDRAW (version 21.3.0.755, Corel Corporation, Ottawa, ON, Canada). Due to the low signal of the detected bands for AQP0 and AQP5, quantitative analysis of their optical density was not performed.
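When comparing labeling intensity between age groups, an ordinal IHC scale like the one used in this study ("−" to "+++") is commonly encoded as integers so that simple group summaries can be computed. A minimal hypothetical sketch (the encoding and the example data are assumptions for illustration, not part of the study's analysis):

```python
# Assumed numeric encoding of the semiquantitative IHC scale
# (ASCII "-" stands in for the "no expression" symbol).
SCORE = {"-": 0, "+": 1, "++": 2, "+++": 3}

def median_score(observations):
    """Median ordinal intensity for a group of scored sections."""
    values = sorted(SCORE[o] for o in observations)
    n = len(values)
    mid = n // 2
    return float(values[mid]) if n % 2 else (values[mid - 1] + values[mid]) / 2.0

# Hypothetical example: stronger labeling in mature bulls than in calves.
calves = ["+", "+", "++"]
reproductive_bulls = ["++", "+++", "+++"]
print(median_score(calves), median_score(reproductive_bulls))  # 1.0 3.0
```

Ordinal encodings like this only support rank-based comparisons (medians, non-parametric tests); the intervals between "+" and "++" are not assumed to be equal.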
For positive controls, bovine protein extracts were used, i.e., lens (for AQP0), renal cortex (for AQP1 and AQP4), renal medulla (for AQP2 and AQP6) and parotid salivary glands and lungs (for AQP5). The obtained images were recorded in digital form and modified (autoscaling was applied and a representative band was cut out) using CorelDRAW (version 21.3.0.755, Corel Corporation, Ottawa, ON, Canada). Due to the low signal of the detected bands for AQP0 and AQP5, quantitative analysis of their optical density was not performed. This study marks the first precise determination of the location and expression patterns of classical aquaporins throughout the bovine male reproductive system, from the testis to the vas deferens, across three animal age groups. These results have facilitated the development of a conceptual framework regarding the potential role of these proteins in maintaining water homeostasis in these crucial organs in male cattle. The presence of these aquaporins was recorded in various cell types and segments of the male reproductive tract, highlighting their importance in supporting proper reproductive function in cattle. The findings presented in this study represent a significant step forward in understanding the specific role of aquaporins in the physiology of the male reproductive system in mammals and provide a foundation for further research in this area. Combined with previously published data on the localization of aquaglyceroporins (AQP3, AQP7 and AQP9) in the same animals, these results represent another piece of the puzzle, forming a comprehensive picture of the distribution of these proteins in the bovine male reproductive system.
Bulevirtide in Chronic Hepatitis D Patients Awaiting Liver Transplantation: Results From a French Multicentric Retrospective Study Introduction Chronic hepatitis delta (CHD) poses a significant global health burden, as it is the most severe form of chronic viral hepatitis. Patients with CHD are at an increased risk of developing liver cirrhosis, decompensated end‐stage liver disease and hepatocellular carcinoma (HCC) . The exact prevalence of CHD remains uncertain due to limited routine testing, but CHD is thought to affect ~2%–5% of chronic hepatitis B (CHB) carriers . Bulevirtide, a pioneering hepatitis B virus (HBV) entry inhibitor, has emerged as a promising therapeutic agent for CHD. By mimicking the sodium taurocholate co‐transporting polypeptide (NTCP) receptor‐binding domain, bulevirtide disrupts the entry of HDV and HBV into hepatocytes, thereby blocking viral spread . Phase 2 trials have demonstrated encouraging efficacy and safety profiles for bulevirtide, leading to its conditional approval for the treatment of compensated CHD by the European Medicines Agency (EMA) in July 2020 and full marketing authorization in July 2023 . In France, bulevirtide has been available since September 2019 through an early access program. However, the optimal duration of bulevirtide treatment is unknown, and current guidelines recommend long‐term treatment . Currently, there are no licensed treatments for CHD‐related decompensated liver disease. Liver transplantation (LT) is the best option, with hepatitis B immunoglobulin (HBIg) and nucleos(t)ide analogs (NA) administered as standard prophylaxis post‐transplantation to avoid HBV recurrence . It is unclear whether bulevirtide treatment in patients on the LT waiting list for decompensated liver disease changes the disease course or helps downstage HCC patients as a bridge to LT.
Therefore, this study aimed to gather real‐life data on bulevirtide use, safety and efficacy in patients awaiting LT or undergoing evaluation for LT for either decompensated cirrhosis or HCC, and to compare these data with those of a cohort of patients not receiving bulevirtide. Patients and Methods 2.1 Patients All consecutive HDV‐infected patients with cirrhosis who were on the liver transplant waiting list or underwent pretransplant evaluation since bulevirtide approval, whether or not they were prescribed bulevirtide, were included in this multicenter study in France. The study was conducted between January and October 2024 in accordance with the Declaration of Helsinki. All patients provided written informed consent, and the study was approved by the Institutional Review Board of Montpellier University Hospital (Number 2024‐05‐053). CHD was defined as persistent HDV RNA for more than 6 months. Cirrhosis was defined either noninvasively (liver stiffness measurement > 12 kPa), histologically (F4 stage fibrosis by METAVIR scoring), or by compatible clinical, laboratory and imaging data. The indication for liver transplant evaluation and waiting list inscription was decompensated cirrhosis, defined by overt ascites, encephalopathy or variceal bleeding. The Model for End‐stage Liver Disease (MELD) score was considered for prioritisation, with a minimum score of 15 required for candidacy. LT was also considered in cases of specific exceptions to MELD scores or in cases of unresectable HCC, considering the French alpha‐fetoprotein (AFP) model, provided that the AFP score was ≤ 2 . 2.2 Treatment Bulevirtide was self‐administered subcutaneously every 24 h. The decision to start bulevirtide treatment, the treatment duration, the decision to co‐prescribe nucleos(t)ide analogs (NA; entecavir or tenofovir) or PegIFNα, and post‐LT treatment were at the discretion of the investigators.
2.3 Follow Up and Data Collection Clinical, biological and virological characteristics were collected at bulevirtide initiation (baseline), Week 24 (W24), Week 48 (W48), LT and after LT. For patients not receiving bulevirtide, data were collected at baseline (inscription on the LT waiting list), at LT, and after LT. Liver‐related events (ascites, variceal bleeding, encephalopathy, liver failure, HCC or LT) and adverse events were documented. Upper endoscopy was performed according to current guidelines . Imaging for HCC surveillance was performed every 3 months, as tumour status, AFP level, and MELD score must be updated at least every 3 months for all patients on the national waiting list for LT. Liver stiffness measurements (LSM) were performed using FibroScan (Echosens, Paris, France) at the discretion of the investigator, by a trained operator according to the manufacturer's recommendations, and following defined quality criteria . Hepatic encephalopathy was graded according to the West Haven criteria , and ascites were graded based on the International Ascites Club classification . HDV RNA was quantified using the EurobioPlex assay (Eurobio Scientific, Les Ulis, France), with a lower limit of detection of 20 IU/mL. HBV DNA and HBsAg were quantified using the Abbott Alinity i platform (Abbott Laboratories, Chicago, IL) according to the manufacturer's instructions, with a lower limit of detection of < 10 IU/mL. Retrospective data collection was conducted using medical files and the CRISTAL database, which is maintained by the French Agency for Transplantation (Agence de la Biomédecine) and prospectively records data for all patients on the LT waiting list. 2.4 Outcomes The primary outcome was virological response at W48, defined as undetectable HDV RNA or a ≥ 2‐log decrease compared to baseline in bulevirtide‐treated patients.
Secondary outcomes were as follows: (1) virological non‐response defined as < 1 log HDV RNA decrease versus baseline in bulevirtide‐treated patients; (2) biochemical response defined as ALT normalisation versus baseline in bulevirtide‐treated patients; (3) combined response defined as both a virological and biochemical response in bulevirtide‐treated patients; (4) percentage of patients undergoing LT; (5) percentage of patients with liver‐related events during bulevirtide treatment or while on the waiting list (liver decompensation, new HCC, or HCC progression while on the waiting list); (6) safety of bulevirtide treatment; and (7) comparison between bulevirtide‐treated and untreated groups, including reasons for treatment abstention. 2.5 Statistical Analysis Statistical analyses were performed using EasyMedStat (v3.32; www.easymedstat.com ). Continuous variables are expressed as mean (±SD) for normally distributed data and as median and range for non‐normally distributed data. Categorical variables are presented as absolute and relative frequencies. Categorical variables were compared using the chi‐squared or Fisher's exact tests and continuous variables by Student's t ‐test, Mann–Whitney U test, or Kruskal–Wallis test, as appropriate. Repeated analyses were performed using the Friedman test. If the null hypothesis of this test was rejected, post hoc pairwise analyses were performed using Nemenyi's test. The Kaplan–Meier method was used to estimate survival probabilities from baseline until LT. The log‐rank non‐parametric test for the comparison of survival distributions was used to compare the survival differences. The alpha risk was set at 5%, and two‐tailed tests were used. The difference between baseline and W48 HDV RNA levels was assessed using the Wilcoxon signed‐rank test. 
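The response definitions above are simple arithmetic on log10 viral loads. As a purely illustrative sketch (the function and variable names are hypothetical, not from the study), the Week-48 classification could be expressed as:

```python
import math
from typing import Optional

# Detection threshold of the HDV RNA assay used in the study (20 IU/mL).
LLOD_IU_ML = 20.0

def classify_virological_response(baseline_iu_ml: float,
                                  w48_iu_ml: Optional[float]) -> str:
    """Classify the Week-48 virological outcome per the study's definitions:
    response = undetectable HDV RNA or a >= 2-log10 decline from baseline;
    non-response = a < 1-log10 decline. `w48_iu_ml=None` encodes an
    undetectable result. A 1-2 log decline fits neither definition and is
    labelled 'partial' here (a label of ours, not the study's)."""
    if w48_iu_ml is None or w48_iu_ml < LLOD_IU_ML:
        return "response"  # undetectable at W48
    decline = math.log10(baseline_iu_ml) - math.log10(w48_iu_ml)
    if decline >= 2.0:
        return "response"
    if decline < 1.0:
        return "non-response"
    return "partial"

# e.g. HDV RNA falling from 5.98 to 2.0 log IU/mL is a 3.98-log decline
print(classify_virological_response(10**5.98, 10**2.0))  # → response
```

This is only a restatement of the outcome definitions in code form; the study itself performed these classifications within its statistical software.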
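Survival probabilities were estimated with the Kaplan–Meier method in EasyMedStat. As a minimal, self-contained illustration of the product-limit estimator itself (with made-up follow-up times, not study data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.
    `times`: follow-up durations (e.g. weeks from listing);
    `events`: 1 = event observed (e.g. LT or death), 0 = censored.
    Returns a list of (event_time, S(t)) pairs. Ties between an event and
    a censoring at the same time keep the censored subject at risk."""
    survival = 1.0
    curve = []
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        survival *= 1.0 - deaths / at_risk
        curve.append((t, survival))
    return curve

# Hypothetical waiting-list follow-up in weeks; 1 = event, 0 = censored
times = [4, 8, 8, 12, 20, 24]
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```

At each observed event time the at-risk count shrinks and the running survival probability is multiplied by the fraction surviving that time, which is exactly the stepwise curve a Kaplan–Meier plot displays.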
Results 3.1 Baseline Characteristics Since the approval of bulevirtide in France in September 2019, 41 HDV‐infected patients with cirrhosis have been listed for LT across nine French liver transplant centers. The patients were retrospectively included in this study between January and October 2024. The median time between BLV initiation and inscription on the LT waiting list was 5.68 (±12.51) months. Twenty (48.8%) patients received bulevirtide while on the LT waiting list (bulevirtide group). Table presents the baseline patient characteristics. The mean age of treated patients was 52.8 ± 9.98 years, and 75% were male. At baseline, 13 patients (65%) were classified as Child‐Pugh A, two (10%) as Child‐Pugh B, and five (25%) as Child‐Pugh C. Median baseline HDV RNA was 5.98 (2.23–10) log IU/mL. Concomitant PegIFNα was prescribed to three (15%) patients at the discretion of the investigator while on the LT waiting list. None of them had presented a prior decompensation episode, but all stopped PegIFNα before W24, one because of intolerance and two after a decompensation episode. For these two patients, stopping PegIFNα resulted in recompensation of liver function and did not change the evolution of their liver disease. NAs were prescribed to 90% of patients, and two patients (10%) received bulevirtide monotherapy without NA, at the discretion of the investigators, as they had compensated liver disease and undetectable HBV DNA. Active HCC was present at baseline in eight patients (40%), with four (50%) classified as BCLC‐0, three (37.5%) as BCLC‐B, and one (12.5%) as BCLC‐A. Twenty‐one (51.2%) patients did not receive BLV treatment while on the LT waiting list (control group); their characteristics are also summarised in Table . The mean age of these patients was 42.94 ± 7.9 years.
At baseline, nine patients (42.9%) were classified as having refractory ascites, six patients (28.5%) had active hepatocellular carcinoma (HCC), and 20 (95.23%) patients had decompensated cirrhosis, with 16 (76.19%) classified as Child‐Pugh C. 3.2 Virological and Biochemical Responses in Bulevirtide‐Treated Patients on the Waiting List The median duration of BLV treatment was 86.90 (±61.4) weeks, and 12 (60%) patients received BLV beyond Week 48 of treatment. Fifteen (75%) patients completed 48 weeks of BLV treatment while on the LT waiting list. The kinetics of HDV RNA are shown in Table . Among the five patients who did not complete 48 weeks of treatment, two were transplanted between Weeks 24 and 48 for compensated cirrhosis and active HCC, one was transplanted earlier for decompensated cirrhosis after less than 24 weeks of treatment, and two patients were still on treatment but had received less than 48 weeks of treatment at the end of the study. In the per‐protocol (PP) analysis, after excluding these five patients, the median HDV RNA level at Week 48 was 2.0 log IU/mL (IQR 4.68). The median difference in HDV RNA between baseline and W48 was 2.56 log IU/mL (IQR = 3.69; 95% CI 0.97–4.91; p = 0.004; Figure ). At W48, HDV RNA was undetectable in eight (53.33%) patients. Virological response at W48 occurred in 11 (73.3%) patients and non‐response in three (20%) patients. In the intention‐to‐treat (ITT) analysis, the virological response rate at Week 48 was 60% (Figure ). HBsAg and HBV DNA levels did not change significantly, and no patient cleared HBsAg while on the waiting list for LT. In the PP analysis, the median ALT level decreased from 91.0 (IQR 69.2) IU/L at baseline to 33.5 (IQR 24.5) IU/L at W48 (median difference 55.5, IQR = 66.75; 95% CI 24.5–109.0; p = 0.001). The ALT kinetics from baseline are shown in Table . At W48, ALT levels normalised in 10 (66.7%) patients, who thus achieved a biochemical response.
At W48, eight (53.3%) patients achieved a combined virological and biochemical response. In the ITT analysis, a biochemical response was reached in 50% of patients (Figure ). Additionally, we performed a subgroup analysis excluding the three patients who received combination therapy with BLV and PegIFNα. Among the 17 patients treated with BLV monotherapy while on the waiting list, 12 (70.6%) completed 48 weeks of BLV while on the LT list. In the PP analysis, the median HDV RNA levels at baseline and at Week 48 were 5.57 log IU/mL (IQR 1.69) and 2.11 log IU/mL (IQR 2.6), respectively. The median difference in HDV RNA between baseline and W48 was 3.78 log IU/mL (IQR = 3.77; 95% CI 2.13–5.65; p = 0.001). Virological response at Week 48 occurred in 9 (75%) patients and non‐response in 2 (16.6%) patients. Biochemical response occurred in 7 (58.3%) patients. There was no statistically significant difference in virological or biochemical response rates between the BLV monotherapy group and the initial cohort including the 3 patients treated with combination therapy (Figure ). Three (20%) patients lost their virological response between Weeks 24 and 48. Two were incomplete responders, as they did not achieve a biochemical response at Week 24, and were transplanted soon after. The third patient experienced a virological relapse starting from Week 24 but remained stable, with a persistent Child‐Pugh A score during follow‐up, and maintained a biochemical response (Table , patient 2); despite losing the virological response, he demonstrated a favourable biochemical and clinical evolution. By the end of the study, 2 (10%) more patients had experienced a virological breakthrough, with consecutive increases in HDV RNA of ≥ 1 log10 IU/mL starting from Weeks 98 and 142 of treatment, respectively (Table , patients 5 and 6). The clinical evolution of patients after inscription on the LT waiting list is shown in Figure .
Additionally, the evolution of patients after BLV initiation is detailed in Table and Figure . During the study, MELD scores increased in two patients, rising from 11 at baseline to 25 at LT in one patient and from 9 at baseline to 31 at LT in the other. The first patient received less than 24 weeks of bulevirtide treatment and had a Child‐Pugh score of C10 at LT, and the second had compensated Child‐Pugh A5 cirrhosis at baseline and presented with variceal bleeding at W50. Overall, MELD scores did not change significantly during the treatment course, with an overall median MELD of 9 (±6.13). The mean change for all patients during the BLV study was +1.53 points. A total of 5 (25%) patients demonstrated a MELD reduction to below 15. The evolution of MELD scores during the study period is shown in Figure . 3.3 Virological and Biochemical Responses in Bulevirtide‐Treated Child C Patients Among the five (25%) decompensated Child‐Pugh C patients treated with bulevirtide (identified by patient numbers in Table ), virological response at Week 48 was achieved in two patients (patients 3 and 5). Patient 1 underwent early liver transplantation at Week 7 after bulevirtide initiation because of worsening liver function, while patient 2 initially achieved a virological response at Week 24 but experienced a virological relapse by Week 48 of treatment. Patient 4 did not demonstrate a virological response throughout the study period but showed progressive improvement in liver function, with a MELD score that decreased from 31 to 13 after 48 weeks of bulevirtide treatment. The patient's liver function improved by Week 24, with ALT levels decreasing significantly from 10 times the upper limit of normal (ULN) at baseline to twice the ULN, and normalising by Week 48. Viral load decreased at Weeks 24 and 48, but by less than 1 log, so the patient was considered a virological non‐responder. The patient could be delisted from the LT waiting list, with antiviral treatment as the only contributing factor.
With respect to biochemical response in Child C patients, two patients (patients 2 and 3) achieved a response by Week 24, and two additional patients (patients 4 and 5) demonstrated a biochemical response at Week 48. Patient 5 showed continued improvement in liver function, with a Child‐Pugh score improving from B9 at Week 24 to A6 at Week 48, coinciding with both biochemical and virological responses. The patient was not removed from the transplant list but had a temporary contraindication owing to the improvement in liver function. However, he was ultimately transplanted at Week 142 following further decompensation that was unrelated to BLV use and attributed to disease progression. Detailed data are presented in Table . 3.4 Virological and Biochemical Responses in Bulevirtide‐Treated Child B Patients At the initiation of BLV, one patient (Table , patient 6) presented with refractory ascites. Cirrhosis was classified as Child‐Pugh B7 (MELD 10) with grade 2 encephalopathy. The patient was listed for LT because a transjugular intrahepatic portosystemic shunt (TIPS) was contraindicated. By Week 24, the MELD score had slightly decreased to 9, while the Child‐Pugh score remained B7. By Week 48, both virological and biochemical responses were achieved, resulting in an improvement in the Child‐Pugh score to A6 and a further reduction in the MELD score to 8. The ascites and encephalopathy had fully resolved by Week 48, and the patient showed significant clinical improvement over the 33‐month follow‐up period, ultimately leading to his removal from the liver transplant waiting list. Moreover, one patient (12.5%) with hepatocellular carcinoma (HCC) at baseline had cirrhosis classified as Child‐Pugh B7 and HCC BCLC stage B (Table , patient 7). Chemoembolization was initially contraindicated due to liver function impairment, and the patient was listed for LT.
Bulevirtide therapy was initiated, and by Week 24, a biochemical response was observed, with an improvement in liver function to Child‐Pugh A6. This improvement allowed the patient to become eligible for chemoembolization and downstaging therapy, which had previously been unfeasible. At the time of LT, the patient had been successfully downstaged to BCLC stage A and exhibited a combined virological and biochemical response. In summary, among the 7 decompensated patients with Child‐Pugh B or C cirrhosis, 3 (42.9%) patients were delisted from the transplant waiting list, including 2 (40%) Child C patients and 1 (50%) Child B patient. In terms of virological response, 3 (42.8%) patients achieved a response at Week 48, including 2 (40%) Child C patients and 1 (50%) Child B patient. Biochemical response at Week 48 was observed in 5 (71.4%) patients, including 4 (80%) Child C patients and 1 (50%) Child B patient. Liver function improvement was noted in 6 (85.7%) patients, including 100% of Child B patients and 4 (80%) of Child C patients. 3.5 Virological and Biochemical Evolution in Bulevirtide‐Treated HCC Patients Seven (35%) patients had active HCC and compensated Child‐Pugh A cirrhosis at baseline and were treated with BLV while on the waiting list and receiving downstaging treatment. Five (25%) patients with compensated Child‐Pugh A cirrhosis developed new HCC at BCLC A stage while on BLV treatment. Specifically, two patients developed HCC at Week 24, one at Week 48, one at Week 120, and another at Week 92 of treatment, which was the reason for listing them for LT. The patient who developed HCC at Week 92 had undergone prior radiofrequency ablation but was not prioritised for LT, as the HCC was inactive and French guidelines prioritise only upon reactivation. By the end of the study, three of these patients had undergone LT, after a mean waiting time of 50.7 weeks (range 38.2–63.1 weeks). The data are presented in Table and Figure .
The development of HCC did not appear to be related to bulevirtide treatment but rather to disease progression. 3.6 Liver Transplantation for HDV‐Infected Patients Among the 20 patients in the BLV group, 12 (60%) underwent LT: one (8.3%) after less than 24 weeks of BLV, two (16.6%) after 24 weeks of BLV, and nine (75%) after at least 48 weeks of BLV. The median time from the start of BLV treatment to LT was 49.97 (±37.39) weeks. For patients receiving a LT, the median HDV RNA at baseline and at LT was 5.43 (±1.91) and 3.11 (±2.44) log IU/mL, respectively, with a significant decrease in HDV RNA between baseline and LT ( p = 0.006). Among the eight patients with active HCC at baseline, five (62.5%) underwent LT, all at BCLC A stage at LT, after a mean waiting time of 32.57 weeks (range 30.2–82.4). Among the 21 patients in the control group, a total of 20 patients (95.2%) underwent LT after a mean waiting time of 17.2 weeks (range, 1–32 weeks) following their listing for transplantation. One (4.8%) patient died while on the waiting list. Reasons for BLV treatment abstention were decompensated disease at inscription on the LT waiting list and the need for off‐label prescription of bulevirtide. At 3 months, transplant‐free survival was 36.7% (95% CI: 16.9–56.8) in the control group versus 76.9% (95% CI: 44.2–91.9) in the BLV group ( p = 0.00714). In a multivariate Cox regression analysis that included BLV treatment, age, and MELD score, only baseline MELD score was predictive of transplant‐free survival (HR 1.11, 95% CI: 1.04–1.18, p < 0.001). 3.7 Follow‐Up After LT After LT, the median follow‐up was 18.76 (±10.9) months. Within this period, two patients (16.7%) died in the BLV group: one at 11 months due to de novo cholangiocarcinoma and another at 5.5 months from an unknown cause. These deaths were not related to BLV treatment. None of the patients in the control group died.
Post‐transplant HDV RNA was undetectable in all patients, and HBV DNA and HBs antigen were also negative. Post‐transplant antiviral treatment consisted of hepatitis B immunoglobulin (HBIg) administration, with 93.75% of patients receiving subcutaneous doses of 500 IU weekly and 3.12% receiving intravenous doses of 6000 IU monthly as long‐term prophylaxis. Additionally, NAs were prescribed, with 53.1% of patients on entecavir, 43.6% on tenofovir, and 3.1% on lamivudine. 3.8 Treatment Tolerance and Safety Among the 17 patients receiving at least 24 weeks of bulevirtide, two (11.7%) developed injection site reactions and pruritus, and two (11.7%) reported fatigue and headache. After 24 weeks of bulevirtide, one (12.5%) patient developed refractory ascites, which was attributed to the evolution of liver disease rather than to bulevirtide. Among the 15 patients who completed 48 weeks of treatment, at W48, one (6.6%) had moderate fatigue and another (6.6%) had pruritus. No worsening of liver function was directly imputable to BLV. One Child‐Pugh class A patient with HCC presented with a transaminase flare at W24 attributed to chemoembolization rather than BLV, which rapidly resolved and did not impact BLV treatment. No ALT flares or significant ALT elevations with BLV use were observed. Among the five Child‐Pugh C patients, treatment tolerance was good. One patient experienced pruritus and another reported fatigue during the treatment period, but deterioration in liver function was not related to BLV treatment.
Twenty (48.8%) patients received bulevirtide while on the LT waiting list (bulevirtide group). Table presents the baseline patient characteristics. The mean age of treated patients was 52.8 ± 9.98 years, and 75% were male. At baseline, 13 patients (65%) were classified as Child‐Pugh A, two (10%) as Child‐Pugh B, and five (25%) as Child‐Pugh C. Median baseline HDV RNA was 5.98 (2.23–10) log IU/mL. Concomitant PegIFNα was prescribed to three (15%) patients at the discretion of the investigator while on the LT waiting list. None of them presented a prior decompensation episode but all stopped PegIFNα before W24 due to: intolerance (1 patient) or after a decompensation episode (2 patients). For this 2 patients stopping PegIFNα resulted in recompensation of liver function and did not change evolution of liver disease. NAs were prescribed to 90% of patients, and two patients (10%) received bulevirtide monotherapy without NA, at the discretion of the investigators, as they had compensated liver disease and HDV DNA was undetectable. Active HCC was present at baseline in eight patients (40%), with four (50%) classified as BCLC‐0, three (37.5%) as BCLC‐B, and one (12.5%) as BCLC‐A. Twenty‐one (51.2%) patients did not receive BLV treatment while on the LT waiting list (control group), the characteristics are also summarised in Table . The mean age of these patients was 42.94 ± 7.9 years. At baseline, nine patients (42.9%) were classified as having refractory ascites, six patients (28.5%) had active hepatocellular carcinoma (HCC), and 20 (95.23%) patients had decompensated cirrhosis, with 16 (76.19%) classified as Child‐Pugh C. Virological and Biochemical Responses in Bulevirtide Treated Patients on the Waiting‐List Median time of BLV treatment was 86.90 (±61.4) weeks and 12 (60%) patients received BLV beyond the Weeks 48 of treatment. Fifteen (75%) patients completed 48 weeks of BLV treatment while on the LT waiting list. The kinetics of HDV RNA is shown in Table . 
Among the five patients who did not complete 48 weeks of treatment, two were transplanted between Week 24 and 48 for compensated cirrhosis and active HCC, one was transplanted earlier for decompensated cirrhosis after having less than 24 weeks of treatment, and two patients were still on treatment but had less than 48 weeks of treatment at the end of the study. In per‐protocol analysis (PP) after excluding these ( n = 5) patients the median HDV RNA level at Week 48 was 2.0 log IU/mL (IQR 4.68). The median difference in HDV RNA between baseline and W48 was 2.56 log IU/mL IQR = 3.69; 95% CI 0.97–4.91; p = 0.004; (Figure ). At W48, HDV RNA was undetectable in eight (53.33%) patients. Virological response at W48 occurred in 11 (73.3%) patients and non‐response in three (20%) patients. In intention‐to‐treat (ITT) analysis virological response at Week 48 was 60% (Figure ). HBsAg levels and HBV DNA levels did not change significantly, and no patient cleared HBsAg while on the waiting list for LT. In PP analysis, median ALT level decreased from 91.0 (IQR 69.2) UI/mL at baseline to 33.5 (IQR 24.5) UI/mL at W48 (median difference 55.5, IQR = 66.75; 95% CI 24.5–109.0; p = 0.001). The ALT kinetics from baseline are shown in Table . At W48, ALT levels normalised in 10 (66.7%) patients that achieved biochemical response. At W48, eight (53.3%) patients achieved combined, virological and biochemical responses. In ITT analysis biochemical response was reached in 50% of patients (Figure ). Additionally, we did a subgroup analysis where we excluded the 3 patients that received combination therapy of BLV and PEG‐IFN. Among these 17 patients treated by BLV on monotherapy while on the waiting list 12 (70.6%) patients completed 48 weeks of BLV while on the LT list. In PP analysis, the mean HDV RNA level at baseline and at Week 48 was 5.57 log IU/mL (IQR 1.69) and 2.11 log IU/mL (IQR 2.6). 
The median difference in HDV RNA between baseline and W48 was 3.78 log IU/mL (IQR = 3.77; 95% CI 2.13–5.65; p = 0.001). Virological response at Week 48 occurred in 9 (75%) patients and non‐response in 2 (16.6%) patients. Biochemical response occurred in 7 (58.3%) patients. There was no statistically significant difference in virological or biochemical response rates between the BLV monotherapy group and the initial cohort including the 3 patients treated with combination therapy (Figure ). Three (20%) patients lost their virological responses between Weeks 24 and 48. Two were incomplete responders, as they did not achieve a biochemical response at Week 24, and were transplanted soon after. The third patient experienced a virological relapse starting from Week 24, but he remained stable with a persistent Child‐Pugh A score during the follow‐up and maintained a biochemical response (Table , patient 2). Despite losing the virological response, he demonstrated a favourable biochemical and clinical evolution. By the end of the study, 2 (10%) more patients experienced a virological breakthrough, as they had consecutive increases in HDV RNA of ≥ 1 log 10 IU/mL starting from 98 and 142 weeks of treatment (Table , patients 5 and 6, respectively). The clinical evolution of patients after listing for LT is shown in Figure . Additionally, the evolution of patients after BLV initiation is detailed in Table and Figure . During the study, MELD scores increased in two patients, rising from 11 at baseline to 25 at LT in one patient, and from 9 at baseline to 31 at LT in the other. The first patient received less than 24 weeks of bulevirtide treatment and had a Child‐Pugh score of C10 at LT, and the second had compensated Child‐Pugh A5 cirrhosis at baseline and presented with variceal bleeding at W50. Overall, MELD evolution did not show significant changes during the treatment course, with an overall median MELD of 9 (±6.13).
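The virological breakthrough criterion above (consecutive increases in HDV RNA of ≥ 1 log10 IU/mL) can be sketched as a simple scan over serial on-treatment measurements. The nadir-based operationalization and the two-consecutive-visits requirement are assumptions for illustration; the study protocol's exact definition may differ:

```python
# Flag a virological breakthrough from serial on-treatment HDV RNA values
# (log10 IU/mL). Assumed rule (a sketch, not the study protocol): two
# consecutive measurements at least `threshold` log10 IU/mL above the
# on-treatment nadir count as a breakthrough.

def virological_breakthrough(series, threshold=1.0):
    nadir = series[0]
    consecutive = 0
    for value in series[1:]:
        nadir = min(nadir, value)           # track the lowest value so far
        if value - nadir >= threshold:      # rebound above the nadir
            consecutive += 1
            if consecutive >= 2:            # two consecutive elevated visits
                return True
        else:
            consecutive = 0
    return False

# A hypothetical patient who suppressed to 1.8 log and later rebounded:
print(virological_breakthrough([5.6, 3.1, 1.8, 3.0, 3.2]))  # True
```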
The mean change for all patients during the BLV study was +1.53 points. A total of 5 (25%) patients demonstrated a reduction of the MELD score to below 15. The evolution of MELD scores during the study period is shown in Figure . Virological and Biochemical Responses in Bulevirtide‐Treated Child C Patients Among the five (25%) decompensated Child‐Pugh C patients treated with bulevirtide (identified by patient numbers in Table ), virological response at Week 48 was achieved in two patients (patients 3 and 5). Patient 1 underwent early liver transplantation at Week 7 after bulevirtide initiation because of worsening liver function, while patient 2 initially achieved a virological response at Week 24 but experienced a virological relapse by Week 48 of treatment. Patient 4 did not demonstrate a virological response throughout the study period but showed progressive improvement in liver function, with a MELD score that decreased from 31 to 13 after 48 weeks of bulevirtide treatment. The patient's liver function improved by Week 24, with ALT levels decreasing significantly from 10 times the upper limit of normal (ULN) at baseline to twice the ULN, and normalised by Week 48. Viral load decreased at Weeks 24 and 48, but by less than 1 log, so the patient was considered a virological non‐responder. The patient could be delisted from the LT waiting list, with antiviral treatment as the only contributing factor. With respect to biochemical response in Child C patients, two patients (patients 2 and 3) achieved a response by Week 24, and two additional patients (patients 4 and 5) demonstrated a biochemical response at Week 48. Patient 5 showed continued improvement in liver function, with a Child‐Pugh score improving from B9 at Week 24 to A6 at Week 48, coinciding with both biochemical and virological responses. The patient was not removed from the transplant list but had a temporary contraindication due to improvement in liver function.
However, he was ultimately transplanted at Week 142 following further decompensation related not to BLV use but to disease progression. Detailed data are presented in Table . Virological and Biochemical Responses in Bulevirtide‐Treated Child B Patients At the initiation of BLV, one patient (Table , patient 6) presented with refractory ascites. Cirrhosis was classified as Child‐Pugh B7 (MELD 10), and the patient had grade 2 encephalopathy. The patient was listed for LT because a transjugular intrahepatic portosystemic shunt (TIPS) was contraindicated. By Week 24, the MELD score had slightly decreased to 9, while the Child‐Pugh score remained B7. By Week 48, both virological and biochemical responses were achieved, resulting in an improvement in the Child‐Pugh score to A6 and a further reduction in the MELD score to 8. The ascites and encephalopathy had fully resolved by Week 48, and the patient showed significant clinical improvement over the 33‐month follow‐up period, ultimately leading to his removal from the liver transplant waiting list. Moreover, one patient (12.5%) with hepatocellular carcinoma (HCC) at baseline had cirrhosis classified as Child‐Pugh B7 and HCC BCLC stage B (Table , patient 7). Chemoembolization was initially contraindicated due to liver function impairment, and the patient was listed for LT. Bulevirtide therapy was initiated, and by Week 24, a biochemical response was observed, with an improvement in liver function to Child‐Pugh A6. This improvement allowed the patient to become eligible for chemoembolization and downstaging therapy, which was previously unfeasible. At the time of LT, the patient was successfully downstaged to BCLC stage A and exhibited a combined virological and biochemical response. In summary, among the 7 decompensated patients with Child‐Pugh B or C cirrhosis, 3 (42.9%) patients were delisted from the transplant waiting list, including 2 (40%) Child C patients and 1 (50%) Child B patient.
In terms of virological response, 3 (42.8%) patients achieved a response at Week 48, including 2 (40%) Child C patients and 1 (50%) Child B patient. Biochemical response at Week 48 was observed in 5 (71.4%) patients, including 4 (80%) Child C patients and 1 (50%) Child B patient. Liver function improvement was noted in 6 (85.7%) patients, including 100% of Child B patients and 4 (80%) Child C patients. Virological and Biochemical Evolution in Bulevirtide‐Treated HCC Patients Seven (35%) patients had active HCC and compensated Child‐Pugh A cirrhosis at baseline and were treated with BLV while on the waiting list and receiving downstaging treatment. Five (25%) patients with compensated Child‐Pugh A cirrhosis developed new HCC at BCLC A stage while on BLV treatment. Specifically, two patients developed HCC at Week 24, one at Week 48, one at Week 120, and another at Week 92 of treatment, which was the reason for listing them for LT. The patient who developed HCC at Week 92 had prior radiofrequency ablation but was not prioritised for LT, as the HCC was inactive and French guidelines prioritise only upon reactivation. By the end of the study, three of these patients underwent LT, after a mean waiting time of 50.7 weeks (range 38.2–63.1 weeks). The data are presented in Table and Figure . The development of HCC did not appear to be related to bulevirtide treatment but rather to disease progression. Liver Transplantation for HDV‐Infected Patients Among the 20 patients in the BLV group, 12 (60%) patients underwent LT: one (8.3%) after less than 24 weeks of BLV, two (16.6%) after 24 weeks of BLV, and nine (75%) after at least 48 weeks of BLV. The median time from the start of BLV treatment to LT was 49.97 (± 37.39) weeks. For patients receiving a LT, the median HDV RNA at baseline and at LT was 5.43 (±1.91) and 3.11 (±2.44) log IU/mL, respectively, and there was a significant decrease in HDV RNA between baseline and LT ( p = 0.006).
Among the eight patients with active HCC at baseline, five (62.5%) underwent LT, all with BCLC A stage at LT, after a mean waiting time of 32.57 weeks (range 30.2–82.4). Among the 21 patients in the control group, a total of 20 patients (95.2%) underwent LT after a mean waiting time of 17.2 weeks (range, 1–32 weeks) following their listing for transplantation. One (4.8%) patient died while on the waiting list. Reasons for BLV treatment abstention were decompensated disease at listing on the LT waiting list and the need for off‐label prescription of bulevirtide. At 3 months, transplant‐free survival was 36.7% (95% CI: 16.9–56.8) in the control group versus 76.9% (95% CI: 44.2–91.9) in the BLV group ( p = 0.00714). In a multivariate Cox regression analysis that included BLV treatment, age, and MELD score, only baseline MELD score was predictive of transplant‐free survival (HR 1.11, 95% CI: 1.04–1.18, p < 0.001). Follow‐Up After LT After LT, the median follow‐up was 18.76 (± 10.9) months. Within this period, two patients (16.7%) died in the BLV group: one at 11 months due to de novo cholangiocarcinoma and another at 5.5 months from an unknown cause. These deaths were not related to the BLV treatment. None of the patients in the control group died. Post‐transplant HDV RNA was undetectable in all patients, and HBV DNA and HBs antigen were also negative. Post‐transplant antiviral treatment consisted of hepatitis B immunoglobulin (HBIg) administration, with 93.75% of patients receiving subcutaneous doses of 500 IU weekly and 3.12% receiving intravenous doses of 6000 IU monthly, as long‐term prophylaxis. Additionally, NAs were prescribed, with 53.1% of patients on entecavir, 43.6% on tenofovir, and 3.1% on lamivudine. Treatment Tolerance and Safety Among the 17 patients receiving at least 24 weeks of bulevirtide, two (11.7%) developed injection site reactions and pruritus and two (11.7%) reported fatigue and headache.
After 24 weeks of bulevirtide, one (12.5%) patient developed refractory ascites, which was attributed to the evolution of the liver disease rather than to bulevirtide. Among the 15 patients who completed 48 weeks of treatment, at W48, one (6.6%) had moderate fatigue and another (6.6%) had pruritus. No worsening of liver function was directly imputable to BLV. One Child‐Pugh class A patient with HCC presented with a transaminase flare at W24 attributed to chemoembolization rather than BLV, which rapidly resolved and did not impact BLV treatment. No ALT flares or significant ALT elevations with BLV utilisation were observed. Among the five Child‐Pugh C patients, treatment tolerance was good. One patient experienced pruritus, and another reported fatigue during the treatment period, but no deterioration in liver function was related to BLV treatment. Discussion Chronic hepatitis delta (CHD) remains a significant global health concern, as patients are at high risk of developing liver cirrhosis, decompensated end‐stage liver disease, and HCC. Degasperi et al. provided the first report on the efficacy and safety of 2 mg bulevirtide monotherapy for 48 weeks in patients with compensated cirrhosis and clinically significant portal hypertension with or without active HCC. In their study, liver function improved in four out of five patients with Child‐Pugh A6 disease, which represents the most important clinical parameter. Recently, Dietz‐Fricke et al. reported real‐world data on the off‐label use of bulevirtide in 19 patients with decompensated Child‐Pugh B liver disease, revealing similar safety and efficacy: their study demonstrated a virologic response rate of 74% after an average treatment duration of 17 weeks, and 47% of patients experienced a clinical improvement in liver function, mainly due to the resolution of ascites.
It is unknown whether treating patients with advanced liver disease with bulevirtide could be an alternative to LT or prompt delisting due to clinical improvement. Our multicenter study provides supplementary real‐world data on the efficacy and safety of bulevirtide in CHD patients with advanced liver disease who are undergoing evaluation for LT or are waiting for LT. Our results demonstrate that 48 weeks of treatment with bulevirtide is well tolerated and associated with significant virological (73.3%) and biochemical responses (66.6%) in the majority of these patients. Notably, this included patients with decompensated Child‐Pugh C cirrhosis, where 60% (3 out of 5) improved to Child‐Pugh A status, and three (15%) patients from the total cohort were subsequently delisted due to liver function improvement. These safety and efficacy findings also extended to patients with HCC undergoing downstaging therapy while on the LT waiting list. Our findings are consistent with other real‐world studies on compensated or decompensated Child‐Pugh B cirrhosis, where bulevirtide use demonstrates antiviral efficacy with respect to HDV RNA levels. The adverse events observed in our study, such as injection site reactions, fatigue, and headache, were generally mild and manageable, confirming the reported favourable safety profile. As this was a real‐life, multicentric study, the decision to treat patients with BLV monotherapy or combination therapy (PEG‐IFN and BLV) was left to the discretion of the investigators. In our study, three patients received combination therapy for less than 24 weeks, and all of them had compensated cirrhosis prior to PEG‐IFN initiation. Adding PEG‐IFN did not change outcomes in terms of virological or biochemical response, but the number of treated patients was small (15%) and the treatment duration was less than 24 weeks.
The rationale for initiating PEG‐IFN combination therapy comes from findings from our French BuleDelta study (NCT04166266), which demonstrated that 72.2% of patients receiving combination therapy achieved a virological response after 96 weeks, compared to 55% in the monotherapy group ( p < 0.01). Furthermore, the same study reported that statistically significant virological and biochemical changes between the combination therapy group and the monotherapy group were seen only after 48 weeks of treatment ( p < 0.0001). Additionally, we performed a comparative analysis with a similar untreated patient cohort from all the participating centers. This analysis showed that patients treated with bulevirtide presented with less severe liver disease at the time of listing, as evidenced by lower Child‐Pugh and MELD scores, compared with untreated patients, and exhibited a higher prevalence of HCC. We highlight the case of a Child‐Pugh B patient for whom bulevirtide treatment led to significant improvement in liver function, allowing for successful HCC downstaging and preventing transplant waitlist dropout. Moreover, biochemical response to bulevirtide allowed five (62.5%) HCC patients from our cohort to receive loco‐regional therapy while on the waiting list, underscoring the potential of bulevirtide to enhance access to HCC treatment options, enabling tumour downstaging and reducing dropout rates. In univariable analysis, the three‐month transplant‐free survival rate was significantly higher in the bulevirtide‐treated group (76.9%, 95% CI: 44.2–91.9) compared to the untreated group (36.7%, 95% CI: 16.9–56.8; p = 0.007), suggesting a benefit of bulevirtide treatment in improving short‐term outcomes in patients on the waiting list. However, due to the small cohort size and differences in clinical characteristics between treated and untreated patients, in multivariable analysis only the MELD score was predictive of transplant‐free survival (HR, 1.11; 95% CI: 1.04–1.18, p < 0.001).
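Because Cox hazard ratios are per-unit and multiplicative, the reported HR of 1.11 per MELD point implies a hazard of HR^k for a k-point difference. A worked illustration (an interpretation aid, not an additional analysis from the study):

```python
# Scale a per-unit Cox hazard ratio to a k-unit covariate difference.
# Per-unit hazard ratios combine multiplicatively: HR_k = HR_1 ** k.

def scaled_hazard_ratio(hr_per_unit: float, k: float) -> float:
    return hr_per_unit ** k

# With the HR of 1.11 per MELD point reported above, a patient listed with
# a MELD score 5 points higher has roughly 1.11**5 times the hazard:
print(round(scaled_hazard_ratio(1.11, 5), 2))  # 1.69
```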
To our knowledge, this is the first study to report safe off‐label use of bulevirtide in a cohort of decompensated Child C patients, and it reports delisting of 15% of patients following significant liver function improvement induced by bulevirtide. This outcome was not observed in any patient from the untreated cohort, where neither Child‐Pugh score nor MELD score significantly differed between the time of listing and LT. Interestingly, one patient from the treated cohort experienced liver function improvement despite not achieving a virological response, which is consistent with prior studies suggesting that biochemical response alone can lead to clinical benefit. Based on our experience, bulevirtide does not seem to alter the evolution of decompensated liver disease, as also suggested by Dietz‐Fricke et al., but may contribute to disease stabilisation in some patients. Biochemical response at Week 48 was associated with liver function improvement in 85% of cases, while persistently abnormal liver function was linked to poor prognosis. However, the small sample size limits the ability to draw statistically significant conclusions. We observed that the MELD score did not change significantly from baseline to LT, possibly because of the small sample size limiting statistical power; therefore, we could not determine a specific MELD score that predicts the futility of bulevirtide treatment, and consequently we cannot recommend a definitive clinical approach. However, our findings suggest that in clinical practice bulevirtide may benefit, and be proposed to, patients with a lower MELD score (< 20) or those needing MELD exceptions, such as those with refractory ascites or untreatable HCC with AFP score ≤ 2 and MELD < 20, who face delays in accessing LT. It is likely futile to treat patients with MELD scores > 25, as they have a low chance of rapid improvement and may benefit more from transplantation.
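The MELD-based considerations above can be condensed into a rough decision sketch. The thresholds (< 20 plausible benefit, > 25 likely futile) mirror the authors' tentative suggestion only; they are illustrative, not validated clinical cut-offs, and the function name and return strings are hypothetical:

```python
# Rough triage sketch of the MELD-based considerations discussed above.
# Thresholds (< 20 plausible benefit, > 25 likely futile) mirror the text's
# tentative suggestion and are NOT validated clinical cut-offs.

def bulevirtide_triage(meld: int, meld_exception: bool = False) -> str:
    if meld > 25:
        # Low chance of rapid improvement; transplantation likely preferable.
        return "likely futile - prioritise transplantation"
    if meld < 20 or meld_exception:
        # E.g. refractory ascites or untreatable HCC with MELD exception.
        return "treatment may be considered while on the waiting list"
    return "uncertain benefit - individual assessment"
```

For example, `bulevirtide_triage(9)` and `bulevirtide_triage(22, meld_exception=True)` both fall in the "may be considered" category, while `bulevirtide_triage(31)` is flagged as likely futile.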
Since transplant patients are already protected from HBV and HDV recurrence by treatments such as NA and HBIg, further reduction of HDV RNA in patients undergoing LT is not needed from a strictly virological point of view. However, in the context of organ shortages, it is important to develop a strategy to optimise transplant utility and avoid futile transplantation. We also reported a 10% rate of late virological breakthrough. Although we did not perform a sequencing analysis, it is possible that these patients had NTCP polymorphisms rather than treatment‐induced resistance, as suggested by Hollnberger et al., and they may benefit in the near future from other emerging therapies. Another hypothesis for incomplete viral response or virological breakthrough could be related to possible differences in BLV kinetics and exposure between patients with decompensated liver disease and those with preserved liver function, potentially reducing drug efficacy, but this needs further research. This study has several limitations. Its retrospective design and small sample size may restrict the generalizability of the findings. Treatment decisions and the timing of treatment introduction were at the investigators' discretion; moreover, treatment was prescribed off‐label in cases of decompensated cirrhosis. Additionally, patients treated with bulevirtide had better initial liver function, which may have influenced outcomes. Also, the study was conducted in France, where treatment is reimbursed, and included patients from tertiary referral centers. Consequently, larger, prospective controlled trials with extended follow‐up periods are necessary to validate these results. Despite these limitations, our study reports the outcomes and safety of bulevirtide treatment in the setting of LT. We also demonstrated favourable treatment outcomes in decompensated Child‐Pugh C cirrhosis that resulted in the delisting of patients, while in HCC patients treatment favoured downstaging strategies.
In addition, we performed a comparative analysis with a similar untreated cohort. In conclusion, our study highlights the potential use of bulevirtide in patients with CHD on the LT waiting list. Bulevirtide demonstrated a favourable efficacy and safety profile along with notable clinical benefits. If confirmed in larger studies, these findings suggest that bulevirtide could be effective in improving liver function in patients with decompensated cirrhosis and may facilitate bridging therapies to transplantation, particularly for those with hepatocellular carcinoma. Therefore, bulevirtide could be a game‐changer in pretransplant settings. All authors contributed to the data interpretation and reviewed, revised, and approved the manuscript. Drs. Meszaros and Pageaux had full access to all the data in the study and took responsibility for the integrity and accuracy of the data analysis. Concept and design: Drs Meszaros, Pageaux, Dumortier, Dharancy. Statistical analysis: Dr. Meszaros. The ethics committee of the University Hospital of Montpellier granted ethical approval. The authors declare no conflicts of interest. Figure S1.
Intra- and intermuscular variations of postmortem protein degradation for PMI estimation | f458b0f7-cc5d-4800-83e1-5f95582be2d8 | 7417396 | Pathology[mh] | Time since death estimation is a crucial aspect in forensic routine and yet often remains unsuccessful considering the currently available methodical spectrum. In recent years, several approaches based on postmortem decomposition of biomolecules, particularly proteins, have been suggested for additional delimitation of the postmortem interval (PMI) . Although there are numerous aspects to be investigated, possibly affecting the outcome of the analysis and thus the accuracy and reliability of the method, some approaches appear promising candidates for future casework. Especially in skeletal muscle, protein degradation has been extensively investigated, depicting beneficial assets as a target tissue. Skeletal muscle tissue is well protected from environmental influences and at the same time easy to access and to handle . Additionally, it represents the largest homogenous compartment of the human body, contributing to 30–40% of bodyweight . This constitutes a major advantage in both research (e.g., multiple samples can be taken and analyzed, without interference by previous samplings) and practical application (e.g., even in severely injured bodies or body parts, there is a high chance to be able to collect unaffected tissue for analysis). Available data, however, is largely limited to thigh muscle , with few exceptions (e.g., M. psoas , M. gastrocnemius ). To fully benefit from the described advantages of muscle tissue, further research is required regarding decomposition similarities and deviations within individual muscles, between muscles, and between muscle types (skeletal muscle, cardiac and smooth muscle). 
If there is no difference in degradation patterns of muscle tissue, a highly conserved mechanism of decomposition is indicated supporting the opportunity to sample any (type of) muscle available for forensic PMI estimation. If, however, differences between muscles are detected, there could be a possibility for succession patterns: If a degradation event has occurred in muscle A but not (yet) in muscle B, more precise PMI estimations could be possible when several muscles are analyzed. Additionally, the temporal range of the method could eventually be extended. Considering that physical circumstances (e.g., temperature dependence of the postmortem breakdown of proteins and differential cooling of body compartments ) as well as physiological aspects (e.g., in vivo variations ) can affect degradation, inter- and intramuscular deviations can also be expected in humans. Since different muscle groups have different proportions of muscle fiber types, decomposition patterns might deviate, as faster degradation of muscle proteins in type II fibers compared with type I fibers had been demonstrated in pigs . Within an individual muscle, similar variations can be caused by a larger share of type I muscle fibers in deeper regions and in the close proximity to bones and tendons . The vicinity to a tendon might also alter data outcome. Increased amounts of collagen (connective tissue) close to myotendinous junctions can entail lower content of target proteins in the sampled muscle specimen. To address the question whether protein degradation occurs in the same fashion and in a similar time sequence within an individual muscle, as well as in different muscles and muscle types, we designed a pilot study analyzing muscle samples from three forensic autopsy cases with varying PMI and (morphological) degree of decomposition. Different locations of M. vastus lateralis were analyzed for intramuscular variance. To investigate intermuscular differences, degradation of M. 
vastus lateralis was compared with that of M. temporalis (jaw muscle) and M. longitudinalis superior linguae (tongue muscle), both expected to be less affected by interindividual conditions including training, injury, aging, etc. To analyze similarities and/or differences in muscle types, skeletal muscle ( M. vastus lateralis ) was compared with cardiac muscle (myocardium) and smooth muscle (pyloric sphincter) samples. Included cases Muscle samples from three autopsy cases with varying degree of decomposition were analyzed in the course of this study (Table ). Case A was a 33-year-old male, who got stabbed in a conflict. Despite resuscitation he died immediately after being transferred to hospital. Time between death and autopsy was about 57 h. The corpse presented fully developed rigor mortis. Due to substantial blood loss, livores were rare. No signs of decomposition were present. This case was classified as "fresh." Case B was a 54-year-old male who died of drowning. The exact time of death was unknown. He went missing by the end of December and was found early February at a hydroelectric power plant (water temperature approximately 3 °C). His corpse depicted reddish-green discoloration of the skin and showed changes associated with postmortem immersion, such as washerwoman's skin and slippage of the epidermis. There was no rigor mortis in the joints and postmortem lividity was pale. This case was classified as "early decomposition." Case C was a 69-year-old male who died of subdural hemorrhage. He was found in the bathroom of his apartment (room temperature approximately 20 °C) after a neighbor reported a foul smell to the police. He had last been seen 10 days earlier, which corresponded to the content of his postbox. His corpse showed a dark brown to greenish discoloration of the skin, colonization by blow fly larvae (including pupae), dry skin, epidermal loss especially at the trunk, and a partial loss of fingernails.
Postmortem lividity could not be detected because of the coloration of the skin. There was no rigor mortis. This case was classified as "advanced decomposition." Notably, information on the PMI is highly imprecise for cases B and C, as is the case for most advanced decomposed corpses. Also, even though the maximum possible PMI for case B exceeds that of case C, the higher environmental temperature distinctly accelerated the appearance of postmortem changes in the latter. As the focus of the present study was to investigate potential intraindividual differences, cases lacking precise PMI information were also included. However, no assignment of protein degradation events to specific PMIs or timeframes should be made, particularly given the small sample size. Sampling and sample preparation In the course of the autopsies, small muscle samples were collected from seven different body regions (Table ), trimmed to approximately 5 × 5 × 5 mm, and snap frozen and stored in liquid nitrogen until further processing. All samples were homogenized by cryogenic grinding and subsequent sonication. 10 × vol/wt RIPA buffer together with a protease inhibitor cocktail was used as lysis and extraction buffer. After centrifugation (10 min at 1000× g ), the supernatant was transferred into a fresh test tube and frozen at − 20 °C until further use. Protein concentrations were determined using a BCA assay. Analysis of protein degradation Electrophoreses (SDS-PAGE) were run on 10% polyacrylamide resolving gels and 5% stacking gels, according to a standard protocol. Thirty micrograms of total protein were prepared, denatured at 90 °C for 5 min, and inserted into the gel wells. Following electrophoresis, the proteins were transferred onto blotting membranes (polyvinylidene fluoride (PVDF)) and stored at − 20 °C until further use.
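The normalization step described above (diluting all lysates to equal total protein, then loading 30 μg per well) is simple volume arithmetic. A sketch assuming the most dilute sample sets the common target concentration; that choice is an assumption, as the paper only states that samples were diluted to equal overall protein content:

```python
# Normalize lysates to a common total-protein concentration and compute
# the per-well volume for a 30 ug load. Choosing the most dilute sample as
# the common target is an assumption for illustration; the paper does not
# specify the target concentration used.

def load_volumes(bca_conc_ug_per_ul, target_ug=30.0):
    target_conc = min(bca_conc_ug_per_ul)        # dilute everything to this
    dilution_factors = [c / target_conc for c in bca_conc_ug_per_ul]
    volume_ul = target_ug / target_conc          # same volume in every well
    return target_conc, dilution_factors, volume_ul

# Example BCA concentrations within the reported 2.0-6.5 ug/ul range:
conc, factors, vol = load_volumes([2.0, 4.0, 6.5])
print(conc)     # 2.0  -> common concentration in ug/ul
print(factors)  # [1.0, 2.0, 3.25]  -> dilution factor per sample
print(vol)      # 15.0 -> ul loaded per well for 30 ug total protein
```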
Prior to immunolabeling, the membranes were blocked in blocking buffer (tris-buffered saline (TBS) with 1% BSA). The following primary antisera were used: mouse monoclonal anti-α-actinin, mouse monoclonal anti-α-tubulin, and mouse monoclonal anti-vinculin. HRP-conjugated polyclonal goat anti-mouse was applied as a secondary antibody. All antibodies were diluted in blocking agent and applied for 1 h. After antibody application, the membranes were extensively washed and rinsed in TBS. Antibody staining was visualized by application of chemiluminescence substrate and documented using a digital gel analysis system (Fusion FX7, Peqlab Biotechnology). Protein band intensities were measured using ImageJ software (ImageJ 1.45s, Java 1.6.0_20). Alterations, such as the disappearance of a native band or appearance of additional bands, were considered degradation events. Signals < 1% the intensity of the native bands were considered background and thus no band. For the depiction in the included figures, lanes were cropped, pasted, and adjusted for brightness and contrast. Muscle samples from three autopsy cases with varying degree of decomposition were analyzed in the course of this study (Table ). Case A was a 33-year-old male, who got stabbed in a conflict. Despite resuscitation he died immediately after being transferred to hospital. Time between death and autopsy was about 57 h. The corpse presented fully developed rigor mortis. Due to substantial blood loss, livores were rare. No signs of decomposition were present. This case was classified as “fresh.” Case B is a 54-year-old male died of drowning. The exact time of death is unknown. He went missing by the end of December and was found early February at a hydroelectric power plant (water temperature approximately 3 °C). His corpse depicted reddish green discoloration of the skin and showed changes associated with postmortem immersion, such as washerwoman’s skin and slippage of the epidermis. 
There was no rigor mortis in the joints and postmortem lividity was pale. This case was classified as “early decomposition.” Case C is a 69-year-old male, who died of subdural hemorrhage. He was found in the bathroom of his apartment (room temperature approximately 20 °C) after a neighbor reported a foul smell to the police. He was last seen 10 days ago, which corresponded to the content of his postbox. His corpse showed a dark brown to greenish discoloration of the skin, colonization by blow fly larvae (including pupae), dry skin, epidermal loss especially at the trunk, and a partial loss of fingernails. Postmortem lividity could not be detected because of the coloration of the skin. There was no rigor mortis. This case was classified as “advanced decomposition.” Notably, information on the PMI is highly imprecise for cases B and C, which is the case in most of the advanced decomposed corpses. Also, even though the maximum possible PMI for case B exceeds the one of case C, the higher environmental temperature distinctly accelerated the appearance of postmortem changes in the latter. As the focus of the present study was to investigate eventual intraindividual differences, also cases with a lack of precise according information were included. However, no assignment of protein degradation events to specific PMIs or timeframes should be made, also given the small sample size. In course of the autopsies, small muscle samples were collected from seven different body regions (Table ), trimmed to approximately 5 × 5 × 5 mm, and snap frozen and stored in liquid nitrogen until further processing. All samples were homogenized by cryogenic grinding and subsequent sonication. 10 × vol/wt RIPA buffer together with a protease inhibitor cocktail was used as lysis and extraction buffer. After centrifugation (10 min at 1000× g ), the supernatant was transferred into a fresh test tube and frozen at − 20 °C until further use. Protein concentrations were determined using BCA assay. 
All samples were diluted to equal overall protein content prior to analysis. Electrophoresis (SDS-PAGE) was performed on 10% polyacrylamide resolving gels with 5% stacking gels, according to a standard protocol. Thirty micrograms of total protein were prepared, denatured at 90 °C for 5 min, and loaded into the gel wells. Following electrophoresis, the proteins were transferred onto polyvinylidene fluoride (PVDF) blotting membranes and stored at −20 °C until further use. Prior to immunolabeling, the membranes were blocked in blocking buffer (tris-buffered saline (TBS) with 1% BSA). The following primary antisera were used: mouse monoclonal anti-α-actinin, mouse monoclonal anti-α-tubulin, and mouse monoclonal anti-vinculin. HRP-conjugated polyclonal goat anti-mouse was applied as a secondary antibody. All antibodies were diluted in blocking agent and applied for 1 h. After antibody application, the membranes were extensively washed and rinsed in TBS. Antibody staining was visualized by application of a chemiluminescence substrate and documented using a digital gel analysis system (Fusion FX7, Peqlab Biotechnology). Protein band intensities were measured using ImageJ software (ImageJ 1.45s, Java 1.6.0_20). Alterations, such as the disappearance of a native band or the appearance of additional bands, were considered degradation events. Signals < 1% of the intensity of the native bands were considered background and thus no band. For depiction in the included figures, lanes were cropped, pasted, and adjusted for brightness and contrast. All samples were collected and processed as intended. Photometric determination of the total protein concentration revealed sufficient amounts (2.0–6.5 μg/μl) for all samples. No irregularities were detected in the electrophoresis runs or Western blot experiments, and all protein bands could be analyzed according to standard protocols.
Intramuscular comparison
A native α-actinin band at approximately 100 kDa was present in all M.
vastus lateralis samples tested in all three cases. Additionally, case C depicted an 80 kDa degradation product in all samples. Although the signal in the sample from the muscle center was comparably weak, it was clearly above the detection threshold and therefore considered present. In all samples collected from cases A and B, a distinct native α-tubulin band was detected at approximately 53 kDa. This band could not be found in any of the samples from case C. None of the samples depicted any α-tubulin degradation products. Native vinculin bands at approximately 117 kDa were detected in all samples analyzed. Although faint in all samples collected from case C, native vinculin bands were above the detection threshold. Meta-vinculin bands were exclusively found in all samples collected from case A. A vinculin degradation product at 84 kDa was present in all case B and C samples, but in none of the case A samples. Similarly, a 75 kDa degradation product was only detectable in samples from the two cases with early (case B, distal and medial M. vastus lateralis samples) and advanced decomposition (case C, all samples). A third vinculin degradation product of approximately 63 kDa was exclusively present in all samples collected from case C (Fig. ).
Intermuscular comparison of skeletal muscles
A native α-actinin band (100 kDa) was present in all muscle samples of cases A and B. In case C, this band was only present in the samples collected from M. vastus lateralis and M. longitudinalis superior linguae, but not in M. temporalis. In the same samples, an α-actinin degradation product at 80 kDa was detected. In M. temporalis collected from case C, another degradation product at approximately 50 kDa was exclusively present. Native α-tubulin bands were present in all samples collected from cases A and B, but not in case C samples. A weak signal of an α-tubulin degradation product at 50 kDa was found in the M.
longitudinalis superior linguae sample of case A, but in none of the other samples. The native vinculin band was detected in the analyzed samples from cases A and B, but only in the M. vastus lateralis sample of case C. Meta-vinculin bands were exclusively found in all samples collected from case A. Degradation products at 84 kDa were detected in all samples of case B and the M. vastus lateralis and M. longitudinalis superior linguae samples of case C. Neither the M. temporalis sample of case C nor any of the samples from case A depicted this 84 kDa degradation product. A second vinculin degradation product at 75 kDa was detected in the M. temporalis sample from case B and all samples from case C. Ultimately, a 63 kDa degradation product was observed exclusively in all case C samples (Fig. ).
Intermuscular comparison of muscle types
A native α-actinin band at approximately 100 kDa was present in all samples tested. Degradation products at 80 kDa were detected in all samples collected from case C. Additionally, numerous protein bands of different molecular weights appeared in the pyloric sphincter samples from cases A and C. The myocardial sample from case C depicted a single additional α-actinin degradation product at approximately 50 kDa. A native α-tubulin band was present in the skeletal muscle and cardiac muscle samples of cases A and B and the smooth muscle sample collected from case B. Additionally, a 50 kDa degradation product was detected in the myocardial samples of cases A and B as well as the pyloric sphincter sample from case B. No α-tubulin bands were detected in any of the case C samples. A native vinculin band was present in the skeletal muscle samples of all three cases, as well as the myocardial samples from cases A and B and the pyloric sphincter sample of case C. The skeletal muscle and cardiac muscle samples of case A were the only ones to depict meta-vinculin bands. An 84 kDa degradation product was present in all samples, except the skeletal muscle sample from case A.
A fragment of 75 kDa was detectable in all samples from case C, as well as the pyloric sphincter sample from case B. Another degradation product with a molecular weight of 63 kDa was found in all samples taken from the pyloric sphincter and in all samples from case C. Additional, smaller fragments appeared in the pyloric sphincter sample of case A (Fig. ).
Standardized protocols and awareness of the limitations of a method are of utmost importance for the forensic application of PMI estimation methods. As such, it is essential to know whether there are intraindividual variations of PMI markers or measurement sites. Despite the small sample size of this study, the obtained results provide valuable data for methodological considerations and future research. The progress of protein degradation in different sample sites within a specific muscle ( M. vastus lateralis ) revealed similar, readily comparable, yet not completely identical profiles. In one of the autopsy cases (B), a single protein fragment (75 kDa vinculin) was not detected in a sample collected from the central muscle region. A temperature effect due to differential postmortem body cooling can be excluded in this case, as the medial sample can be expected to remain at a higher temperature for a longer time span, whereas the distal site cools faster. As the postmortem development of degradation products has to be considered a gradual process, a signal can lie above or below the detection limit in different samples at a specific time point.
This stochastic effect, however, represents a minor problem for PMI estimation as long as several proteins (and degradation products) are considered and appropriate mathematical models, including confidence intervals, are applied. Notably, all other protein patterns from samples originating from different locations within M. vastus lateralis were identical within each case. Especially considering that extreme sampling positions were selected (1 cm distance to bone and tendon), the results suggest that intramuscular variations of protein degradation patterns can be largely disregarded. This endorses experimental designs with multiple samplings from the same muscle at different time points in research, as well as the avoidance of injured or otherwise affected locations (e.g., in dismembered body parts) for forensic PMI estimation. In different skeletal muscles, discrete degradation patterns with increasing degrees of morphological decomposition were detected. However, distinct differences between the investigated muscles were observed. Here, we selected muscles that can be considered less susceptible to in vivo variations such as training, injury, and atrophy ( M. temporalis and M. longitudinalis superior linguae ) and compared them with M. vastus lateralis , which served as a target muscle in previous studies. Especially M. temporalis showed advanced degradation compared with the other muscles in case C. The native bands of α-actinin and vinculin and some degradation products were lacking in this sample. Methodological errors can be excluded, as other degradation products were detected. The 75 kDa vinculin band detected in the M. temporalis muscle of case B likewise supports the finding of enhanced decomposition in this muscle, although, as mentioned above, this has to be viewed with some caution. M. longitudinalis superior linguae also showed some deviations in degradation patterns compared with the other muscles.
Specifically, this concerns an α-tubulin degradation product in cases A and B, an approximately 100 kDa vinculin fragment in case B, and the loss of the native vinculin band in case C. Possible explanations for these differences include the varying composition of skeletal muscles with respect to fiber types, connective tissue, and vascularization, according to their function (tonic, fast or slow twitch, etc.). Additionally, varying postmortem cooling rates in different parts of the body can significantly affect protein degradation. Proteolysis is a metabolic process and as such highly dependent on (environmental) temperature. While superficial structures and organs adjust to ambient temperature very quickly postmortem, internal sites in the body core can maintain elevated temperatures for more than 24 h. A similar effect has also been described in the context of the electrical excitability of skeletal muscles. However, the muscles selected as well as the small sample size of this study are not appropriate to test this hypothesis. Additional research, for example, comparing a distant, superficial muscle (e.g., M. gastrocnemius ) with a deep proximal muscle (e.g., M. psoas major ) in a larger sample, would be necessary. Nevertheless, all tested skeletal muscles included in this study depicted decomposition-related changes that can potentially be used as markers for PMI estimation. While the (expected) smaller influence of training and aging in M. temporalis and M. longitudinalis superior linguae can be beneficial in reducing interindividual variations in estimation models, these muscles are considerably more difficult to investigate in non-autopsy settings (e.g., more complex sampling) compared with large limb muscles. However, varying decomposition speeds in different skeletal muscles have the potential to increase both the precision and the applicable timeframe of PMI estimation when several muscles are analyzed.
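One way to operationalize such a multi-muscle comparison is to encode each sample's banding pattern as a set of (protein, kDa) events and take set differences. A small sketch, with patterns paraphrased from the case C results of this study (the encoding and function names are our own, not the paper's method):

```python
# Case C band patterns as (protein, kDa) presence sets, paraphrased from the results
patterns = {
    "M_vastus_lateralis": {("actinin", 100), ("actinin", 80), ("vinculin", 117),
                           ("vinculin", 84), ("vinculin", 75), ("vinculin", 63)},
    "M_temporalis":       {("actinin", 50), ("vinculin", 75), ("vinculin", 63)},
}

def compare_patterns(a, b):
    """Bands exclusive to each muscle, plus the shared core."""
    return {"only_a": a - b, "only_b": b - a, "shared": a & b}

diff = compare_patterns(patterns["M_vastus_lateralis"], patterns["M_temporalis"])
print(sorted(diff["only_b"]))   # [('actinin', 50)] - exclusive to M. temporalis
print(len(diff["shared"]))      # 2 shared degradation products (75 and 63 kDa)
```

Encoded this way, "advanced degradation" in one muscle shows up as lost native bands in `only_a` and extra low-molecular-weight fragments in `only_b`.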
Standard protocols, established for the analysis of skeletal muscle protein degradation, also worked well for cardiac and smooth muscle samples. The results suggest an acceptable transferability of protocols across the investigated muscles, although this should always be carefully validated prior to outcome generation, as differing proteins, protein isoforms, and antibody specificity might limit the application. Myocardial samples revealed successive protein degradation with advancing PMI, generally similar to the patterns found in skeletal muscle. In comparison with the vastus lateralis muscle, several signs of advanced decomposition were detected, such as α-actinin in case C, α-tubulin in cases A and B, and vinculin in cases A and C. At no point was the reverse observed. This clearly suggests an enhanced decomposition speed in cardiac muscle tissue compared with M. vastus lateralis . Whether this is due to the proximity to the body core and/or cell-type-specific metabolism remains speculative. No valid assertions on the postmortem protein degradation of smooth muscle can be made based on the obtained data. Protein analysis of the pyloric sphincter showed extreme deviations (a multitude of protein fragments of varying molecular weight for α-actinin and vinculin in cases A and C and a complete lack of α-tubulin bands in case A) from all other muscles sampled. It can be assumed that this is a consequence of the proximity to the aggressive acidic environment of the stomach and to digestive enzymes, especially proteases from the pancreas, rather than of PMI-dependent decomposition. Interestingly, case B did not depict such dramatic changes. Here, we refrain from speculating about correlations with the time point of the last meal or the cause of death. Yet, the analyses of the case B samples indicate the general applicability of the protocols to smooth muscle tissue.
There is weak evidence for advanced decomposition of smooth muscle, given the presence of α-tubulin and vinculin degradation products in case B compared with the other muscle types tested. Additional experiments using a more suitable source for investigations of protein degradation in smooth muscle without (major) influence from the gastrointestinal system (e.g., the tunica media of the aorta) and a larger sample size are necessary to test this. In routine forensic application, easy sampling of the target tissue is a crucial aspect. In the present comparative study, all muscles and muscle regions were easily identified and sampled accordingly during autopsy, regardless of the decomposition stage of the corpses. However, for other study designs, such as studies investigating multiple samples from an individual muscle in field research (e.g., at a human forensic taphonomy facility) as well as sampling in a non-autopsy setting (e.g., at a crime scene), most of the muscles used in this study suffer from restrictions. In fact, only M. vastus lateralis and (with some limitations) M. temporalis can be considered for such research, as the sampling procedure would most likely have a manageable influence on the rest of the body. During sample preparation, no aberrations were detected. Differences in rigidity due to varying content of cytoskeletal elements and connective tissue, which in turn could impair the homogenization process and/or the obtainable protein content, were not observed. Also with regard to postmortem decomposition (especially in case C), protein concentration measurements revealed sufficient amounts in all samples, indicating the applicability of the method even at advanced degradation stages. Postmortem degradation of muscle proteins is a highly conserved process within an individual muscle as well as (with varying rates) throughout different muscles and muscle types.
Intramuscular variances are limited, supporting validity and replicability, whereas intermuscular differences offer a possibility to further improve the method. Large skeletal muscles of the limbs offer beneficial opportunities for research and are comparatively easy to access (e.g., by muscle biopsy), even in scenarios where no autopsy is carried out. At the same time, other muscles have advantages with regard to smaller interindividual variations. Analyzing specific muscles, or a thoughtful combination of several muscles, can ultimately improve both the precision and the temporal applicability of a reliable method to estimate the PMI.
First report of the molecular detection of human pathogen
Tick-borne diseases are a growing medical concern worldwide. Ticks are considered the main reservoirs and vectors of Rickettsia , obligate intracellular bacteria responsible for the transmission of rickettsial diseases to humans. The rickettsioses represent some of the oldest and most recently recognized infectious diseases. The causative agents belong to the genus Rickettsia and are presently classified into four groups: the spotted fever group (SFG), the typhus group, the Rickettsia bellii group, and the Rickettsia canadensis group. SFG rickettsioses constitute newly identified Rickettsia species around the world. In the past few decades, numerous species of tick-borne rickettsiae, previously thought to be non-pathogenic, were recognized as human pathogens. In 1999, three novel rickettsial genotypes, RpA4, DnS14, and DnS28, were observed in ticks from Russia. Using genotypic and phenotypic analyses, these bacteria were recognized as novel species of SFG rickettsiae, and in 2008, the species was designated Rickettsia raoultii . The major clinical manifestations of R. raoultii infections include scalp eschar and neck lymphadenopathy. Initially, these were termed Dermacentor -borne necrosis erythema and lymphadenopathy or tick-borne lymphadenopathy. R. raoultii has been identified in many Asian and European countries. In 1999, Dermacentor nuttalli and Rhipicephalus pumilio ticks collected in the southern parts of the former Soviet Union were shown to harbor these bacteria; thereafter, other species of Dermacentor ticks (i.e., D. reticulatus , D. marginatus , D. silvarum , and D. niveus ) from various parts of the former Soviet Union, as well as from France, Spain, and Germany, were also shown to carry these bacteria. Subsequently, R.
raoultii was detected in other hard ticks too, such as Haemaphysalis , Rhipicephalus , Hyalomma , and Amblyomma , which are observed predominantly in Europe and Asia. The aim of this study was to determine the presence of R. raoultii in ticks and to assess the circulation of this pathogen in tick populations in the Republic of Korea (ROK). We found R. raoultii in Haemaphysalis longicornis ticks. To the best of our knowledge, this is the first report providing molecular evidence of R. raoultii in ticks from the ROK.
Tick sampling and classification
In 2018, 35 ticks were collected from 29 patients with a history of tick bites in Gwangju Metropolitan City, Jeollanam Province, ROK. Ticks were identified on the basis of their molecular, morphological, and standard taxonomic characteristics. Briefly, the ticks were first decontaminated using 70% ethanol, rinsed twice using sterile phosphate-buffered saline (PBS), and dried on sterile filter paper. Each sample was then placed in a hard-tissue-grinding MK28 tube (Bertin Technology, Rockville, MD, USA) containing 800 µl PBS and 1× PC/SM (i.e., penicillin and streptomycin). Subsequently, ticks were ground using a FastPrep-24 Classic instrument (MP Biomedicals, Solon, OH, USA) and stored at −80 °C until DNA extraction.
DNA extraction
Total genomic DNA was extracted from 150 µl of the tick homogenate and from 300 µl of whole blood of the respective patients using a QIAamp Tissue & Blood Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer’s instructions; the DNA was eluted in volumes of 50 µl and 100 µl, respectively. The samples were stored at −20 °C until polymerase chain reaction (PCR) amplification.
PCR amplification
For molecular identification, tick genomic DNA was subjected to PCR amplification of a fragment of the mitochondrial 16S rRNA gene.
To assess the presence of Rickettsia species in ticks and patients, genomic DNA samples were subjected to a nested PCR targeting the outer membrane protein A ( ompA ) and citrate synthase ( gltA ) genes. The PCR primers and the respective product sizes are shown in Table . The reactions were carried out in a total volume of 20 µl, comprising 16 µl distilled water, 1 µl of each primer (10 pmol/µl), and 2 µl genomic DNA template, using AccuPower PCR PreMix (Bioneer, Daejeon, ROK). The PCR analysis was performed using an AB thermal cycler (Applied Biosystems, Foster City, CA, USA). A positive control with R. conorii DNA and a negative control with distilled water instead of template DNA were included in each PCR set. The amplified products were analyzed by electrophoresis on a 1.2% agarose gel containing ethidium bromide and then visualized using an ultraviolet transilluminator system (FAS-III, Toyobo, Osaka, Japan). A 100-bp ladder (Bioneer Corp, Korea) was used as a molecular weight marker.
Phylogenetic analysis
The PCR products were purified using a QIAquick PCR purification kit (Qiagen) and sequenced in both directions by a commercial service provider (Solgent Inc, Daejeon, Korea). To analyze the percentage of similarity, the resulting sequences were compared with sequences from GenBank using the Basic Local Alignment Search Tool (BLAST) program. The neighbor-joining method was employed to produce a phylogenetic tree with the ClustalW algorithm of the MegAlign program (DNASTAR, Madison, WI, USA). Bootstrap analysis was performed to test the stability of the phylogenetic tree acquired through the neighbor-joining method.
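The percent-similarity comparison described here reduces, for two already aligned reads, to counting matching positions. A minimal sketch (toy sequences, not the actual ompA reads; a real comparison would first require an alignment, e.g., with ClustalW as above):

```python
def percent_identity(seq_a, seq_b):
    """Percent of matching positions between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b) or not seq_a:
        raise ValueError("sequences must be non-empty and aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy 20-mer differing at a single position -> 95.0% identity
print(percent_identity("ACGTACGTACGTACGTACGT",
                       "ACGTACGTACGTACGTACGA"))  # 95.0
```

The homology range reported in the results below (99.4%–100%) corresponds to at most a handful of mismatches over the sequenced ompA fragment.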
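Bootstrap support for a clade, such as the value of 100 reported for the R. raoultii cluster in the results, is simply the share of replicate trees that recover that clade. A minimal sketch of the counting step (the replicate data are invented; real replicates come from resampled alignments):

```python
def bootstrap_support(clade, replicate_trees):
    """Percentage of bootstrap replicates whose tree contains `clade`
    (each replicate tree is represented here by its set of clades)."""
    clade = frozenset(clade)
    hits = sum(clade in {frozenset(c) for c in tree} for tree in replicate_trees)
    return 100.0 * hits / len(replicate_trees)

# Invented example: the isolate clusters with strain IM-16 in 100/100 replicates
target = {"tick_isolate", "R_raoultii_IM16"}
replicates = [[{"tick_isolate", "R_raoultii_IM16"}, {"R_japonica", "R_rickettsii"}]
              for _ in range(100)]
print(bootstrap_support(target, replicates))  # 100.0
```

A support of 100 therefore means the grouping was recovered in every resampled replicate, which is the statistical backing for the clade reported below.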
Molecular, morphological, and taxonomic characteristics revealed that 4 of the 35 ticks were Ixodes nipponensis , 14 were Amblyomma testudinarium , and 17 were H. longicornis (Table ). PCR tests to amplify the ompA and gltA gene fragments for the identification of SFG rickettsial disease agents were conducted on all 35 ticks. Sequencing data of the amplified ompA gene fragment revealed one distinct Rickettsia species in three H. longicornis ticks, which was identified as R. raoultii . Morphological and taxonomic characteristics showed that these ticks were adult females. The PCR targeting the gltA gene did not reveal any distinct Rickettsia species. Even though the three R. raoultii -positive ticks were collected from patients, blood samples from the respective three patients did not show R. raoultii infection, as assessed by PCR, nor did the patients show any symptoms suggesting infection by this pathogen. The R. raoultii ompA sequences of all three tick samples were 99.4%–100% homologous to previously reported partial ompA sequences of R. raoultii IM-16 strains (CP019435 and MF002523). Phylogenetic analysis using the neighbor-joining method indicated that the isolates of the current study belonged to a single clade with R.
raoultii reference strains (Fig. ). The bootstrap analyses statistically supported the main clustered sequence. Currently, 14 distinct tick species from 6 genera, including Dermacentor ( D. nuttalli , D. reticulatus , D. silvarum , and D. marginatus ), Amblyomma ( A. helvolum ), Haemaphysalis ( H. concinna , H. japonica , H. erinacei , and H. longicornis ), Hyalomma ( Hy. asiaticum and Hy. lusitanicum ), Ixodes ( I. persulcatus and I. ricinus ), and Rhipicephalus ( Rh. pumilio and Rh. turanicus ), have been shown to harbor R. raoultii . This bacterium was also found in Melophagus ovinus , a louse fly or sheep ked. Dermacentor ticks are considered the main hosts and natural reservoirs of R. raoultii all over Europe and in a few countries in Asia, including China and Mongolia. In the ROK, the first evidence of SFG rickettsiae in ticks was reported in 2003, followed by the first case of SFG rickettsiosis (Japanese spotted fever) in a patient in 2005. Over a period of 16 years, various species of SFG rickettsiae have been identified in ticks ( R. japonica , R. monacensis , and R. rickettsii ) and humans ( R. japonica and R. monacensis ) in the ROK. Thus far, R. raoultii had not been identified in this region, and the present study reports its detection for the first time. Previously, only one study from China indicated the presence of R. raoultii in H. longicornis ticks. Different strains of R. raoultii , including Marne, 8/9 Karaganda, Khabarovsk T , Shayman, and IM 16, have been documented in Europe, Russia, and China. The phylogenetic tree produced in the current study showed that the positive samples formed a distinct clade, at a high bootstrap value (100), with the R. raoultii strain IM 16 from China. DNA sequences of R. japonica , R. monacensis , and R. rickettsii have been identified in H. longicornis from the ROK. According to a recent study, H. longicornis is the most prevalent tick species in the ROK (88.9%), showing a nationwide distribution.
Despite its identification in multiple tick species, reports on human infections with R. raoultii are still scarce. Infections of patients with R. raoultii have been reported in Europe and the Far East of Russia, with a few cases in China. Another study from China identified R. raoultii DNA in clinical samples, in addition to positive serological reports in patients from other countries. Based on these findings, R. raoultii is considered a human pathogen. The observations of the current study indicate the presence of R. raoultii in ticks in the ROK, which warrants further research. To our knowledge, our results provide the first evidence of the identification of R. raoultii in ticks from the ROK. The detection of this Rickettsia species in H. longicornis ticks suggests that these ticks may be a vector of the pathogen in the ROK. This observation broadens our knowledge of the geographical distribution of R. raoultii . Even though no human clinical infection was observed, the high pathogenicity of this bacterium is a major concern for public health in this region. Further extensive research on a broader range of ticks, together with corresponding surveillance programs, is therefore required.
Enhancing medical education for undergraduates: integrating virtual reality and case-based learning for shoulder joint | b3d4f8e0-8b15-4e94-9e0c-8e576ddd84ac | 11460170 | Anatomy[mh] |

Virtual Reality (VR) has emerged as a rapidly advancing technology with particularly profound implications for medical applications. The integration of digital technology into the medical field promises to revolutionise how educational content is delivered and received. With its potential to offer immersive, interactive learning experiences, VR technology stands at the forefront of educational innovation. As digital technologies progress at an unprecedented rate, they drive significant refinements in our educational paradigms. Traditional teaching methods are being redefined and enhanced, paving the way for more sophisticated and effective educational models. VR, in particular, with its advanced motion tracking and superior imaging capabilities, creates a fully digitised environment where users can engage with content in a highly interactive and impactful manner. This technological innovation is not only reshaping entertainment and social interaction; it is also transforming the landscape of medical training and therapy, presenting new possibilities for both educators and learners. Makransky and colleagues have been instrumental in synthesising existing studies into a coherent theoretical framework that explores how VR can enhance learning outcomes. Their Cognitive Affective Model of Immersive Learning (CAMIL) identifies presence and agency as key factors facilitated by immersion and control. The model highlights six factors—interest, motivation, self-efficacy, embodiment, cognitive load, and self-regulation—that contribute to effective learning and knowledge transfer in immersive virtual reality environments. Their work identifies key psychological support mechanisms within VR environments and assesses their impact on educational effectiveness.
This framework has significant implications for the design of pedagogical strategies and future studies, suggesting ways in which VR can be optimally utilised to support and enhance learning experiences. The introduction of VR in medical education has initiated revolutionary changes, significantly enhancing student engagement, comprehension, and preparedness for clinical practice. By providing an immersive and detailed understanding of complex medical procedures, VR allows students to acquire essential skills in a controlled and interactive environment.

Advantages of virtual reality learning

VR technology exhibits exceptional capability in creating immersive and interactive learning environments, and it has been extensively implemented in the field of medical education and training. This technology is recognized as an effective instructional method, due to its high levels of system usability and learner satisfaction. VR technology provides a robust and standardized platform for clinical practice and procedural instruction across various disciplines, including internal medicine, surgery, rheumatology, ophthalmology, psychiatry, medical engineering integration, and biopharmaceuticals. As an alternative to conventional anatomical training approaches, VR significantly reduces the risks associated with the complexities of human anatomy and the unpredictability of patient interactions. Although VR does not notably decrease the time required to complete assessments, it has been shown to enhance test scores, satisfaction levels, and enjoyment within anatomical education. Furthermore, virtual reality learning environments create new opportunities for supporting learners and improving the learning process. The study by Andreasen et al. demonstrates that students using VR technology to practice the ISBAR communication technique achieve superior educational outcomes compared to traditional paper-based methods.
In the context of cardiopulmonary resuscitation training, the application of VR technology significantly enhances students' performance and satisfaction.

Advantages of case-based learning

In clinical settings for medical students, Case-Based Learning (CBL) serves as an effective strategy for teaching basic science subjects. Fink and his colleagues indicate that interactions with real patients during CBL are perceived as more authentic compared to virtual simulations, leading to higher diagnostic accuracy. Furthermore, while VR increases the sense of presence, it also elevates cognitive load, which may diminish learning outcomes. Electroencephalogram (EEG) measurements have demonstrated that this increased cognitive demand can reduce the effectiveness of learning. Additionally, VR's reliance on visual-spatial abilities can pose a limitation for students with weaker skills in this area. For these students, navigating and interacting within a 3D VR space can be challenging, potentially hindering their understanding of complex subjects like anatomy. In contrast, CBL provides a consistent learning experience across varying visual-spatial abilities, ensuring that all students, regardless of their inherent skills, can engage effectively with the educational content. Therefore, while VR offers significant advantages in terms of engagement and interactivity, it is essential to balance its use with traditional methods like CBL, which provide high diagnostic accuracy and equitable learning opportunities.

Integration of virtual reality with case-based learning

In pursuit of nurturing highly skilled and refined medical professionals, educators utilize various pedagogical strategies to elevate student engagement, knowledge retention, and practical capabilities. The outbreak of COVID-19 necessitated a pivot to online modalities, compelling the conversion of conventional face-to-face instruction and laboratory classes to remote formats.
Within the domain of medical education, there is an emerging exploration into the synergistic application of Virtual Reality and Case-Based Learning, aimed at providing students with tailored training scenarios that bridge theoretical knowledge with clinical practice experience. This amalgamation offers distinct advantages over traditional, singular teaching approaches, particularly in the cultivation of teamwork skills. The integration bolsters basic recall capabilities and, although it may augment the associated cognitive load, leaves deeper cognitive processing unaffected. Moreover, the fusion of VR and CBL creates a risk-free simulated environment, allowing students to repetitively practice complex medical procedures until proficiency is achieved, without diminishing educational quality. For instance, Wainman et al. demonstrate that combining traditional teaching methods with pre-class online learning and practical training significantly enhances team performance. In conclusion, the educational methodology integrating VR and CBL merits further investigation.

Study and hypothesis

This investigation aims to elucidate the collaborative effects of Virtual Reality and Case-Based Learning in the domain of medical education, specifically regarding their potential to augment student engagement, enhance knowledge retention, and facilitate the development of practical skills. To achieve this, we have initiated a randomized controlled trial to assess the distinctions between a combined VR and CBL model and a traditional CBL approach within medical education. This evaluation focuses on students' acceptance of differing pedagogical strategies and their rates of knowledge retention under each method. This study examined various dimensions such as teaching methods, cognitive load, teacher-student interactions, skill enhancement, and overall satisfaction, all grounded in established educational theories.
In summary, we posit the following hypothesis: the integration of VR with CBL will significantly elevate student engagement and knowledge retention when contrasted with the traditional CBL model.
Trial design

This study employed a parallel-group, assessor-blinded randomized controlled trial (RCT) design.
The research meticulously arranged a comprehensive five-week educational plan, which included two courses each week, each lasting two hours, for a total of 20 teaching hours. The course setup was sequential and rigorously structured: the first week served as an introductory week to the basic theories of the shoulder joint; the following two weeks focused on detailed anatomical studies of the shoulder joint; and the last two weeks were dedicated to intensive rehabilitation physiotherapy of the shoulder joint (Fig. ). The initial theoretical teaching was conducted through traditional Lecture-Based Learning (LBL). In contrast, the subsequent anatomy and rehabilitation physiotherapy modules employed CBL, based on eight carefully selected real cases of shoulder joint injuries. These cases were chosen to encompass a wide range of common pathological conditions and anatomical details. The course design emphasised four key areas: the anatomy of the shoulder joint, injury characteristics, radiological assessment and diagnostic techniques, and the formulation of personalised rehabilitation strategies. This study received ethical approval from the Ethics Committee of Shengjing Hospital of China Medical University, and all participants provided written informed consent.

Participants

This study enlisted 82 third-year students from the Rehabilitation program at China Medical University, who had not yet undergone clinical internships or been exposed to complex shoulder joint rehabilitation courses. These students were considered ideal candidates for participation due to their lack of prior exposure to clinical settings and intricate rehabilitation protocols related to shoulder joints. Furthermore, they exhibited similar educational backgrounds and foundational knowledge, thereby minimising potential external influences on the study outcomes. The random allocation of these participants into different study groups ensured the reliability and validity of the research findings.
Throughout the study, they underwent rigorous course scheduling and training to assess the impact of the educational program on their knowledge and skills pertaining to shoulder joint rehabilitation. Active participation from these individuals during the study period, combined with their valuable feedback post-course completion, provided deeper insights and understanding into the outcomes of this study.

Interventions

This study deployed a multifaceted teaching intervention among third-year undergraduates in the Rehabilitation Therapy programme at China Medical University, with the objective of enhancing student engagement, improving knowledge retention, and advancing the development of practical skills.
Case-based learning

To prepare for each Case-Based Learning course, teachers carefully selected a representative clinical case that met the course requirements and organised its medical history, test results, and imaging data. These materials were distributed to students two days before the course, allowing them to prepare in advance.
In this study, both the Virtual Reality combined with Case-Based Learning group (VR + CBL) and the Case-Based Learning group (CBL) dealt with eight complete and real cases of shoulder joint diseases, including frozen shoulder, rotator cuff tear, subacromial impingement syndrome, shoulder dislocation, calcific tendinitis of the supraspinatus tendon, long head of the biceps tendinitis, acromioclavicular joint dislocation, and suprascapular nerve entrapment.

Integration of virtual reality with case-based learning

The combined VR experiment group used VR equipment for teaching while conducting case-based learning. After the group report and teacher summary, students entered an interactive computer simulation environment. While the teacher explained relevant knowledge points, students directly observed real anatomical structures in the simulation environment and personally operated treatment methods, experiencing the entire process of diagnosing and treating shoulder joint injuries. In the functional anatomy part of VR, students used the angle, axial, and temporal tools located in the lower right corner of the computer screen to learn the range of motion for shoulder flexion, extension, adduction, abduction, internal rotation, external rotation, and internal and external rotation in the abducted position. They also studied the biomechanical changes in the scapula, clavicle, sternum, and chest wall during these activities, the rhythm of the glenohumeral joint, and the anatomical starting and ending points of the muscles around the shoulder joint, including the rotator cuff, as well as the timing of their involvement in movement, guided by interactive step prompts. In the rehabilitation treatment part, based on the aforementioned test results and the shoulder joint function scale provided by human–computer interaction, a rehabilitation treatment plan was formulated.
This plan included setting rehabilitation goals, designing rehabilitation training movements, and learning various rehabilitation training methods, such as pendulum exercises, ball control, standing and prone push-ups, glenohumeral joint sliding, and shoulder mobilization techniques (Fig. ).

VR equipment information

VR software names: virtual reality training system for shoulder joint functional anatomy and movement principles V 1.0 (NO. 2023SR1418086) and virtual reality training system for the rehabilitation and treatment of rotator cuff injuries V1.0 (NO. 2023SR1418090).
Computer hardware requirements: CPU: 3.5 GHz, RAM: 16 GB, GPU memory: 8 GB, storage capacity: 500 GB.
Operating system and version: Windows 10.
Network bandwidth requirements: bandwidth ≥ 200 Mbps.
Head-mounted display name: the HTC VIVE Pro professional edition basic set.
Head-mounted display information: the basic set includes two base stations (1.0) and two controllers (1.0). The head-mounted display uses SteamVR™ tracking (2.0) and features Hi-Res Audio certified built-in headphones. It also has a built-in microphone and an adjustable interpupillary distance (IPD) for optimal comfort.
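The minimum workstation specification listed above lends itself to a simple pre-flight check. The sketch below is illustrative only; the dictionary keys and the example machine profile are assumptions, not part of the study's setup:

```python
# Minimum workstation spec for the VR software, taken from the list above.
MIN_SPEC = {
    "cpu_ghz": 3.5,
    "ram_gb": 16,
    "gpu_mem_gb": 8,
    "storage_gb": 500,
    "bandwidth_mbps": 200,
}

def failing_requirements(machine, minimum=MIN_SPEC):
    """Return the spec keys a machine fails to meet (empty list = qualifies)."""
    return [key for key, required in minimum.items()
            if machine.get(key, 0) < required]

# Hypothetical lab workstation profile
lab_pc = {"cpu_ghz": 3.6, "ram_gb": 32, "gpu_mem_gb": 8,
          "storage_gb": 512, "bandwidth_mbps": 300}
print(failing_requirements(lab_pc))  # prints [] when the machine qualifies
```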
Following the completion of the educational interventions, this study anticipates improvements in students' knowledge retention rates, proficiency in acquired skills, and elevated levels of classroom participation and satisfaction. The CBL group employed the nationally standardised textbook, 'Musculoskeletal Rehabilitation', along with eight authentic clinical cases of shoulder joint injuries for instructional purposes. In addition to the resources used by the CBL group, the VR + CBL group integrated a virtual reality simulation system and VR equipment into their teaching approach.

Students were designated as Group A (n = 20), Group B (n = 21), Group C (n = 21), and Group D (n = 20). Group A received CBL teaching assisted by VR technology, Group D underwent traditional CBL teaching, Group B used VR technology assistance in anatomy courses only, and Group C applied VR technology assistance in physiotherapy courses only. They first completed a one-week basic theoretical course on the shoulder joint, which included two courses held on Tuesday and Thursday, each lasting two academic hours. This was followed by a theoretical test (T0), the results of which were used for subsequent analysis. The test was a written test lasting 45 min, with a total score of 100 points.
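The group allocation just described (82 students into Groups A to D with sizes 20, 21, 21, and 20) can be sketched programmatically. The study itself used a random number table; the Python version below is an illustrative equivalent, and the student-ID scheme and seed are assumptions:

```python
import random

def allocate_groups(n_students=82, sizes=None, seed=None):
    """Randomly assign anonymised student IDs to fixed-size groups."""
    sizes = sizes or {"A": 20, "B": 21, "C": 21, "D": 20}
    assert sum(sizes.values()) == n_students
    rng = random.Random(seed)
    ids = list(range(1, n_students + 1))
    rng.shuffle(ids)                      # stands in for the random number table
    groups, start = {}, 0
    for label, size in sizes.items():
        groups[label] = sorted(ids[start:start + size])
        start += size
    return groups

groups = allocate_groups(seed=2024)
print({label: len(members) for label, members in groups.items()})
# prints {'A': 20, 'B': 21, 'C': 21, 'D': 20}
```

Sorting each group's IDs makes the allocation easy to audit while leaving the assignment itself fully random.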
The anatomy-related courses on the shoulder joint lasted for two weeks, with two courses each week (Tuesday and Thursday), totalling four courses, each lasting two academic hours. To prevent bias from different lecturers and time slots, each class analysed a case taught by the same teacher. In the first week, the morning session on the first day was allocated to Groups A and B, while the afternoon session was for Groups C and D. On the second day, the morning session was for Groups C and D, and the afternoon session for Groups A and B. The teaching sequence in the second week mirrored that of the first week. After the course, a theoretical test (T1) was conducted to evaluate the learning outcomes. This test was a written test lasting 45 min, with a total score of 100 points. The rehabilitation physiotherapy-related courses on the shoulder joint also lasted for two weeks, with two courses per week (Tuesday and Thursday), totalling four courses, each lasting two academic hours. The scheduling order of the rehabilitation courses was identical to that of the anatomy courses. After the course, a test (T2) was conducted, which included a written test and a practical operation, with a total score of 100 points. The written test lasted 45 min and was worth 40 points, while the practical operation lasted three to five minutes per person and was worth 60 points. The final test (T3) was conducted at the end of the semester, lasting 120 min, with a total written test score of 100 points. The scores for the shoulder joint-related questions, which accounted for 22%, were analysed. After T3, the statistical results of T0, T1, and T2 were announced to all students without compromising personal privacy. Each student's scores were shared individually, and a questionnaire survey was conducted. In both the traditional CBL group and the VR + CBL group, students were divided into four teams, each comprising five to six members, to promote collaborative learning. 
Each team was tasked with reviewing relevant case materials, conducting research on the latest academic articles to solidify their knowledge foundation, and preparing a concise report. During the course sessions, each team was allotted five minutes to present their insights and pose questions. Following each team's presentation, the teacher provided comprehensive feedback, offered their own perspectives, and addressed students' queries, with approximately five minutes dedicated to question-and-answer sessions for each team. After all team presentations, the teacher delivered a case exposition, introducing key knowledge points and diagnostic and treatment methodologies. In the traditional CBL group, the teacher played relevant videos to enhance students' understanding of the case. Students emulated the actions depicted in the videos and demonstrated them within their groups while the teacher offered guidance. In the VR experimental group, students utilized VR equipment for more immersive learning to gain a deeper understanding of the case. The teacher initially demonstrated, after which students engaged in practical operations, with the teacher providing guidance. Each team was equipped with a set of VR devices, and team members used them in rotation, with each individual having five to ten minutes of operation time. Through the VR devices, they could directly observe anatomical structures, execute treatment procedures, and simulate the entire diagnostic and treatment process. Upon completing these processes, teachers in both the traditional CBL group and the VR + CBL group recapitulated the course content, further clarified students' queries, and assigned pertinent homework to reinforce learning outcomes. In this study, no incentives or reimbursements were provided to participants.
The teaching was conducted by a team of experienced professional educators: the basic theory component was taught by a professor with 27 years of experience in medical education; the anatomy module was led by an anatomy expert with 16 years of teaching expertise; and the rehabilitation physiotherapy segment was conducted by a Master of Physical Therapy with 12 years of clinical practice experience. The CBL group employed a traditional face-to-face interactive teaching model, while the VR + CBL group utilised both face-to-face and virtual simulation modalities. The same course was delivered by the same instructor to groups receiving identical interventions. The student-to-teacher ratio was either 40:1, 41:1, or 42:1. Foundational theory classes were conducted in a lecture room with all students participating together. Both the anatomy-related courses and the rehabilitation therapy-related courses were held in the same classroom, regardless of whether VR was used. To minimise bias, groups undergoing different interventions attended classes at different times, with their schedules alternating accordingly. This study meticulously arranged a five-week comprehensive educational plan, consisting of two courses each week, each lasting two hours, totalling 20 teaching hours. The course structure was sequential and rigorously organised: the first week served as an introductory week to the basic theories of the shoulder joint; the following two weeks focused on detailed anatomical studies of the shoulder joint; and the last two weeks were dedicated to intensive rehabilitation physiotherapy for the shoulder joint. In this study, the various educational methods employed did not require learners to possess specific adaptive capabilities, nor were there any alterations made to the educational methods specifically for the research. Attendance was managed by the teaching staff, who conducted roll calls at the beginning of each course.
As these were compulsory courses within the programme, all students attended, resulting in a 100% attendance rate. The materials and educational strategies used in the interventions were delivered as originally planned, and all interventions were conducted on schedule. Outcomes Knowledge retention assessment At the conclusion of each section of the course, a knowledge assessment was conducted to evaluate the students' understanding of the material. These assessments were independently created by the instructors and comprised 100 multiple-choice questions (each with five options), with a maximum score of 100 points. The duration of the test was 120 min. To minimise experimental bias and interference, the knowledge tests were proctored by a teacher who was not informed about the study. This teacher also reviewed and tabulated the results. In the final test, scores related to shoulder joint issues were specifically analysed, consisting of 22 multiple-choice questions (each with five options), with a maximum score of 22 points. Anonymous questionnaire survey An anonymous questionnaire survey was conducted to evaluate students' views on the course structure, content practicality, teaching methods, teacher-student interaction, assignments and feedback, learning resources, study load, skill improvement, course recommendation, and overall satisfaction . The scoring range for each indicator in the questionnaire was from 1 to 5, with 1 being 'Strongly Disagree,' 2 being 'Disagree,' 3 being 'Neutral,' 4 being 'Agree,' and 5 being 'Strongly Agree.' The student questionnaire exhibited high reliability, with a Cronbach's alpha exceeding 0.8. Sample size The sample size for this study was determined based on the research by Falahan et al., with a confidence level set at 95%, a test power of 80%, and an effect size of 0.65 . Using G*Power (version 3.1.9.7), the total required sample size was calculated to be 78 participants.
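The paper reports computing this requirement with G*Power; as a rough cross-check, the same figure can be reproduced with the standard normal-approximation formula for a two-group comparison, assuming Cohen's d = 0.65, two-sided α = 0.05, and power 0.80. This is an illustrative sketch, not the authors' actual G*Power configuration:

```python
import math
from statistics import NormalDist

def two_group_sample_size(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group n for a two-sided, two-group comparison (normal approximation).

    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, plus the common
    small-sample correction term z_{1-alpha/2}^2 / 4.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 0.84 for power = 0.80
    n = 2 * ((z_a + z_b) / d) ** 2 + z_a ** 2 / 4
    return math.ceil(n)

per_group = two_group_sample_size(0.65)
print(per_group, 2 * per_group)  # 39 per group -> 78 in total
```

The small-sample correction term nudges the normal approximation toward the exact noncentral-t result and reproduces the reported total of 78.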
A total of 82 students were available and willing to participate, all of whom signed informed consent forms. Therefore, there were 82 participants in total, meeting the requirements outlined in the study design. Randomization Using a random number table method, the 82 students were divided into four groups, each comprising either 20 or 21 participants. To minimise experimental bias and interference, the random allocation was conducted by a teacher who was not informed about the specifics of this study. The groups were designated as Group A ( n = 20), Group B ( n = 21), Group C ( n = 21), and Group D ( n = 20). Group A received CBL teaching assisted by VR technology, Group D underwent traditional CBL teaching, Group B received VR technology assistance in anatomy courses only, and Group C applied VR technology assistance in physiotherapy courses only. Blinding Students were partially blinded in the study as they were assigned numbers without being informed of their significance. To ensure educational equity, the same instructor taught all groups, resulting in the instructors not being blinded. To reduce errors and bias, the teacher responsible for statistical analysis was blinded to the group assignments. Statistical methods Data analysis commenced with the Shapiro–Wilk test to ascertain the normality of continuous variables, setting the stage for appropriate statistical testing. For normally distributed data, comparisons were conducted using independent sample t-tests, one-way Analysis of Variance (ANOVA), and chi-square tests. Alternatively, non-normally distributed data were analyzed using Mann–Whitney and Kruskal–Wallis H tests, adhering to a statistical significance threshold of P < 0.05. The reliability of Likert scale responses was assessed via the Cronbach's alpha coefficient, ensuring the internal consistency of survey instruments. All statistical procedures were executed in SPSS software (Version 29.0). 
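Cronbach's alpha, used above to gauge the questionnaire's internal consistency, is simple enough to compute directly: α = k/(k−1) · (1 − Σ item variances / variance of respondent totals). A minimal sketch with invented 5-point Likert responses (not the study's data):

```python
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i] = all respondents' scores on item i (equal lengths assumed)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total
    item_var = sum(variance(scores) for scores in items)  # sum of per-item variances
    return k / (k - 1) * (1 - item_var / variance(totals))

# Fabricated illustration: 4 questionnaire items, 6 respondents
demo = [
    [5, 4, 4, 3, 5, 4],
    [5, 4, 5, 3, 4, 4],
    [4, 4, 4, 2, 5, 5],
    [5, 5, 4, 3, 5, 4],
]
print(round(cronbach_alpha(demo), 2))  # 0.87 for this toy data, i.e. above the 0.8 threshold
```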
Baseline characteristics Table presents the demographic data of the undergraduates participating in this study. There were no significant differences between Groups A, B, C, and D in terms of age, gender, and theoretical test scores ( P > 0.05). The mean age of all participants was 20.43 years, with a standard deviation of 0.65. Outcomes and data analysis Test scores evaluation In the theoretical tests, there were no significant differences in the scores of students between groups ( P > 0.05), indicating comparability in statistical terms. Overall, compared to the pure CBL teaching method, VR + CBL demonstrated a significant advantage in the test scores for the anatomy course, although there was no statistical difference in the treatment course test scores (Table ).
Specific analysis: In the anatomy course, the groups using VR + CBL (Groups A and B) outperformed those using CBL (Groups C and D), demonstrating a significant effect of VR. In Test 1, Group A achieved superior results compared to Group B, which in turn surpassed Group D, while Group D marginally exceeded Group C in their overall scores. There was no statistically significant difference in scores between Group A and Group B ( P > 0.05), nor between Group C and Group D ( P > 0.05). However, the scores of Group A were significantly higher than those of Groups C and D ( P < 0.05), and the scores of Group B were also significantly higher than those of Groups C and D ( P < 0.05) (Table ). In the rehabilitation therapy course, Group A, which consistently used VR + CBL, achieved the highest scores, while Group D, which always used CBL, scored the lowest. Interestingly, the scores of Group C, which also employed VR + CBL, were lower than those of Group B, which used CBL. The analysis revealed a hierarchical score across the groups, with Group A leading, followed sequentially by Group B, Group C, and Group D, reflecting a gradation in achievement from highest to lowest. The scores of Group A were significantly higher than those of Group D ( P < 0.05), but the differences in scores between the other groups were not statistically significant ( P > 0.05) (Table ). In the final test, the results demonstrated a clear sequence in score rankings, with Group A leading, followed by Group C, Group B, and Group D in descending order. The difference in scores between Group A and Group D was statistically significant ( P < 0.05), while the differences between Group B and Group D, as well as between Group C and Group D, were not statistically significant ( P > 0.05) (Table ). Student questionnaire evaluation After the final test, student questionnaires were distributed and collected, resulting in 82 returned questionnaires and a 100% response rate. 
Analysis using the Kruskal–Wallis H test revealed significant differences ( P < 0.05) among groups in six key areas: teaching methods, teacher-student interaction, learning resources, skill improvement, course recommendation, and overall satisfaction (Fig. ). On the other hand, there were no significant differences ( P > 0.05) regarding course structure clarity, content practicality, assignments and feedback, and learning burden (Table ).
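For readers unfamiliar with the Kruskal–Wallis H test used for these group comparisons, the statistic reduces to rank sums over the pooled sample. The sketch below uses made-up, tie-free data; real Likert responses contain many ties and would need the usual tie correction:

```python
def kruskal_wallis_h(groups: list[list[float]]) -> float:
    """H = 12 * sum(R_i^2 / n_i) / (N(N+1)) - 3(N+1), assuming no tied values."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}  # ranks 1..N (no ties)
    n_total = len(pooled)
    rank_sums = [sum(rank[x] for x in g) for g in groups]
    return 12 * sum(
        r ** 2 / len(g) for r, g in zip(rank_sums, groups)
    ) / (n_total * (n_total + 1)) - 3 * (n_total + 1)

# Three illustrative groups with clearly separated scores
print(round(kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), 6))  # 7.2
```

A large H (here 7.2 with 2 degrees of freedom) corresponds to a small P value under the chi-squared approximation, i.e. a significant between-group difference.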
This study employed a variety of statistical methodologies to investigate the impact of VR + CBL on the learning outcomes of medical students. Statistical analyses revealed that in anatomy tests, Groups A and B, who utilized Virtual Reality technology, significantly outperformed Groups C and D, who did not use VR. This difference underscores the effectiveness of VR technology in enhancing students' mastery of the anatomy of the shoulder joint. It also validates the viability of substituting traditional CBL with VR + CBL in the field of anatomy . However, in the rehabilitation therapy tests, although the VR group scored higher on average, the difference was not statistically significant, indicating that the impact of VR technology varies across different educational domains . In order to deepen our understanding of these differences, we provide detailed descriptions of two VR learning environments. The VR environment for anatomy was specifically designed to simulate the three-dimensional structure of the shoulder joint, enabling students to gain a deeper understanding of various anatomical structures through interactive manipulation.
The corresponding anatomy knowledge tests primarily assess students' ability to identify and understand these structures. Existing experiments have found that Virtual Reality is more effective in enhancing medical anatomy learning outcomes than traditional methods . In the field of rehabilitation therapy, the VR environment simulates the rehabilitation process; however, these simulated scenarios may not align precisely enough with the post-test content on rehabilitation therapy knowledge, which affects the significance of the learning outcomes. Another possible explanation is the high complexity of learning in rehabilitation therapy, which requires more practical operation and the accumulation of experience. Current VR technology may not yet fully replicate this complexity. Therefore, although VR technology provides a novel teaching method, its applicability and effectiveness may vary across different disciplines due to the specific requirements and complexities of the course content . It is noteworthy to compare the performances of Group A (VR + CBL) and Group D (traditional CBL) across the anatomy and rehabilitation therapy tests: students in Group A consistently outperformed those in Group D. This suggests that Virtual Reality technology enables students to immerse themselves in an interactive artificial world, providing a more intuitive learning experience and deepening their understanding of complex shoulder joint structures and treatment methods . These findings demonstrate that the educational model VR + CBL surpasses traditional simulation practices in educational outcomes, highlighting the potential to enhance learning effectiveness . The comparison between Group B (using Virtual Reality only in the anatomy course) and Group C (using Virtual Reality only in the rehabilitation therapy course) revealed varying learning outcomes, reflecting the suitability and effectiveness of Virtual Reality technology across different educational contents . 
Specifically, the application of Virtual Reality technology in anatomy learning may be more effective than in rehabilitation therapy learning . This difference not only underscores the necessity of flexibly applying and developing Virtual Reality technology in medical education but also highlights the importance of anatomical knowledge as a foundational basis for learning in rehabilitation therapy . Further analysis indicates that the group integrating Virtual Reality technology comprehensively (Group A) consistently achieved higher scores across all tests than other groups. These findings confirm the significant positive impact of the comprehensive application of Virtual Reality technology on the overall learning outcomes of students . Statistical results show that VR + CBL significantly enhances students' acquisition of knowledge and skills in shoulder joint anatomy and rehabilitation therapy, emphasizing the potential value of Virtual Reality technology in providing immersive and interactive learning experiences in medical education . The study by Fink et al. has also demonstrated the efficacy of integrating teaching support into case-based learning, and that such support can significantly enhance learning success . In addition, VR might also be useful to include support for anatomy curricula that lack resources like donor-based dissection. The results of the questionnaire survey corroborated the findings of the statistical analysis. In terms of teaching methods, the groups using Virtual Reality technology (Groups A and B) received higher evaluations than the traditional Case-Based Learning group (Group D) and the partially VR-integrated group (Group C). This suggests that students perceive the integration of Virtual Reality technology as significantly enhancing the effectiveness of teaching methods, particularly in fostering intuitive understanding and practical operation . 
This finding is consistent with the improvements in learning outcomes and further validates the role of Virtual Reality technology in enhancing the quality of medical education . Studies have shown that, compared to traditional methods, VR technology in education has increased test scores, satisfaction, and enjoyment, although it has not significantly reduced the time taken to complete tests . In terms of teacher-student interaction and learning resources, students using VR technology reported higher satisfaction. This may be because the VR environment offers more opportunities for interaction between teachers and students, such as direct guidance and feedback in virtual scenarios, while also increasing student participation through virtual case discussions . Additionally, VR technology enriches learning resources by simulating complex medical scenarios in the real world, reducing resource consumption and thereby improving teaching effectiveness . Feedback on skill improvement indicated that the VR + CBL teaching method not only strengthened the learning of theoretical knowledge but, more importantly, enhanced students' practical operational abilities . This is particularly evident in physical therapy and clinical decision-making training, where students can repeatedly practice in a safe virtual environment, thereby deepening their understanding and application of course content . Furthermore, the high course recommendation rate reflects students' recognition and satisfaction with the VR + CBL method, highlighting its potential impact and applicability in future medical education . Students expressed high overall satisfaction with the VR + CBL method, further emphasising its popularity as a teaching approach. They universally believe that this innovative method enhances enjoyment, interactivity, and effectiveness in learning through its realistic virtual environments, marking it as a promising educational strategy . 
Studies by Fink (2023) and others also confirm that VR education exhibits higher system usability and satisfaction . This positive feedback supports the further development and application of Virtual Reality technology in medical education, particularly in skill-intensive and highly specialised medical fields . The survey results provide valuable insights, indicating that the introduction of VR technology not only improves students' learning outcomes and skills but also significantly enhances student satisfaction and participation . Future research should further explore effective integration of VR technology with case-based learning methods, customising them according to different medical specialties and course content to enhance students' immersion and participation, thereby maximising teaching effectiveness and student satisfaction . Additionally, research should consider the impact of introducing VR technology on the roles of teachers, teaching methods, and students' self-learning abilities . Despite providing valuable insights into the application of Virtual Reality technology in medical education, this study faces several major limitations that might affect the universality and interpretation of the findings . The sample size of the study was relatively small, which could limit the generalisability of our conclusions. Although an efficacy analysis was conducted to ensure that the sample size was sufficient to detect significant differences between teaching methods, a smaller sample may not fully represent the diverse population of medical students. Moreover, the diversity of the sample—such as educational background, grade level, and prior VR experience—was not thoroughly explored, which are factors that could influence the effectiveness of VR learning. The scope of application for Virtual Reality technology was limited in this study. 
While we explored the use of VR in anatomy and rehabilitation therapy education, it did not extend to other medical fields, such as surgical procedures or clinical decision-making training, where the potential benefits of VR might differ. Expanding the scope of VR application could reveal a broader educational impact. This study primarily focused on short-term learning outcomes, without assessing the long-term retention of knowledge and development of clinical skills by students. The impact of Virtual Reality technology on long-term memory retention and skill preservation is a crucial aspect of evaluating its true value in education. In conclusion, to overcome these limitations and deepen our understanding of the innovative applications of Virtual Reality technology in medical education, future research should expand the sample size, increase sample diversity, explore the use of VR technology across a broader range of medical fields, and conduct longitudinal studies. These measures will facilitate a more accurate assessment of the educational benefits of Virtual Reality technology and its long-term impact on the development of students' clinical skills . This study underscores the effectiveness of integrating VR with CBL in medical education, highlighting the achievements in student engagement, knowledge retention, and skill development, especially in the treatment of the shoulder joint. Despite the limited scope of the study and the small sample size, the results advocate for a broader application of VR technology in medical training subjects. The study suggests that by incorporating virtual reality into medical courses, it is possible to closely combine educational experience with the complexity of clinical practice, better preparing future medical professionals. This represents a significant advancement in the evolution of medical education methods. |
Evaluating smartphone-based 3D imaging techniques for clinical application in oral and maxillofacial surgery: A comparative study with the vectra M5 | b6a2d07d-c716-4915-b8ab-8e704716252f | 11723895 | Dentistry[mh] | Three-dimensional (3D) surface imaging is widely employed in oral and maxillofacial surgery (OMFS), in which precise assessments of anatomically complex structures and subtle volumetric changes are critical . The technology is utilized in numerous clinical contexts, improving patient care and communication between patients and clinicians in pre- and postoperative settings . Therefore, 3D surface imaging has become a leading technology, gradually replacing conventional photography in surgical planning and outcome evaluations . Recently, smartphone-based approaches for 3D surface imaging have been introduced . Despite these technological advancements, smartphone-based 3D surface imaging is still not sufficiently integrated into standard procedures in OMFS. Few studies have evaluated the capability of smartphones to capture anatomically complex facial regions. Interestingly, some studies highlight the potential of smartphone-based methodologies for capturing facial features with cost-effectiveness and portability, while other studies report limited clinical applicability. D’Ettorre et al. evaluated facial surface models (SMs) of 40 individuals, utilizing three different systems: the 3dMDtrio stereophotogrammetry system ( 3dMD Inc., USA ), the iPhone XS with the TrueDepth-based Bellus3D Face application ( Bellus3D Inc., USA ), and the iPhone XS with the application Capture ( Standard Cyborg Inc., USA ). The research documented the duration of image acquisition and processing, and also gauged the surface-to-surface deviation and distance between 18 landmarks on the reference images from 3dMD and those obtained with Bellus3D or Capture . 
The authors concluded that the use of smartphone applications in conjunction with the TrueDepth camera demonstrates promising results. According to the authors, the primary benefits lie in cost-effectiveness and portability. Andrews et al. compared SMs of the face captured with the 3dMDface system and the iPhone 11 Pro TrueDepth camera combined with the Bellus3D Face application. They found that 97% of landmarks were within 2 mm of error compared to the reference data. The authors reported an overall root mean square (RMS) difference between the iPhone 11 Pro and 3dMD system of 0.86 mm ± 0.31 mm. High intra-observer and inter-observer reliabilities were reported . Seifert et al. performed a study involving 15 patients to compare the accuracy of three 3D facial scanning applications for the iPhone 14 Pro (EM3D ( Brawny Lads Software, USA ), Polycam ( Polycam Inc., USA ), and ScandyPro ( Scandy LLC., USA )) with a stationary photogrammetry system (3dMD). They found that the smartphone applications demonstrated mean surface deviations of 1.46 mm for EM3D, 1.66 mm for Polycam, and 1.61 mm for ScandyPro. A mean landmark-to-landmark deviation of 1.27 mm for Polycam, 1.26 mm for ScandyPro, and 1.45 mm for EM3D was observed. The authors concluded that smartphone-based systems offer a cost-effective and portable alternative to stationary systems, particularly in resource-limited settings . Nightingale et al. compared facial SMs of 20 participants acquired using the Apple iPhone 8S ( Apple Inc., USA ) in conjunction with the Camera + 2 iOS application ( tap tap tap LLC., USA ) and the Artec Spider (Artec Group, Luxembourg) structured light scanner. They found an accuracy of 1.3 mm ± 0.3 mm between Artec-based and smartphone-based SMs. They concluded that smartphone-based photogrammetry is a reliable, low-cost alternative for clinical 3D facial imaging . In contrast, Thurzo et al.
observed differences between SMs exceeding 3 mm when comparing SMs generated by the TrueDepth camera utilizing the Bellus3D Dental Pro application with SMs generated by cone beam computed tomography . They concluded that smartphone-based 3D surface imaging has limited clinical relevance. Nevertheless, they suggested that employing smartphone-based 3D surface imaging for facial assessments could still yield value, especially under circumstances where precision below 3 mm is not imperative. While these studies present a thorough approach, it is important to note that, owing to technological advancement, the applicability of smartphone-based 3D surface imaging may have improved. Additionally, detailed volumetric assessments of the face have been performed by few prior investigations. To address this gap, this study aimed to clarify the applicability of smartphone-based 3D surface imaging for clinical use in OMFS by comparing two smartphone-based approaches with the established gold standard, the Vectra M5 system (Canfield Scientific, USA). SMs generated by the two approaches were compared based on their alignment with the Vectra M5 system, employing landmark-to-landmark distance analyses and volumetric assessments. This comparative analysis aims to provide insights into the potential clinical use of smartphone-based 3D surface imaging and contributes to a broader understanding of its accuracy in comparison to established technologies in the field of OMFS.

Study protocol

This prospective monocentric study was conducted at the Department of Oral and Maxillofacial Surgery, University Hospital Regensburg, Germany, following approval from the local ethics committee (23–3400-101). The investigation involved 30 healthy adult students enrolled at the University of Regensburg, excluding individuals with recent craniofacial surgery, maxillofacial trauma, or significant skeletal deformities.
Participant preparation

Consistent with prior studies on 3D surface imaging, participants were positioned in a standardized posture under controlled lighting conditions. After receiving an explanation of the procedure, they were seated on a stool and instructed to maintain a neutral facial expression while keeping their heads in a natural position. Participants were also directed to wear a hairband and to remove any makeup. Following the protocol by Othman et al., 15 specific landmarks were identified on each participant’s face using a white eyeliner . Figure provides an overview of all landmarks.

3D data acquisition

The study design included obtaining 3D SMs of each participant’s face using three different methods: stereophotogrammetry with the Vectra M5, the smartphone application “3D-Scanner App” V2.1.2 (Laan Consulting Corp., USA) utilizing the TrueDepth camera of the iPhone 14 Pro (Apple Inc., USA), and the light detection and ranging (LiDAR) camera of an iPhone 14 Pro in conjunction with photogrammetry. The smartphone application was selected based on its capability to offer both a “TrueDepth-Mode” and a “Photo-Mode”. While the TrueDepth camera employs vertical-cavity surface-emitting laser (VCSEL) technology to directly generate a metric point cloud, which is later used for SM generation, the photogrammetry setup utilizes the iPhone’s LiDAR sensor to ensure a metrical 3D reconstruction. LiDAR employs time-of-flight measurements to ascertain the distance (i.e., depth) between an object and the sensor . The stereophotogrammetry-based Vectra M5, renowned for its high accuracy and widely employed in 3D facial imaging, was utilized as the reference in the study . Organizing multiple photographs into stereo pairs and integrating their overlapping regions to create a 3D SM, stereophotogrammetry is considered the gold standard for 3D surface imaging . The Vectra M5 uses the Vectra Analysis Module (VAM) for SM analysis .
All scans were performed in a designated 3D scanning room designed for children with craniofacial deformities and orthognathic surgery patients. A comparative visualization of 3D imaging techniques is shown in Fig. .

3D data processing

3D data obtained from smartphone-based methods and the Vectra M5 were exported as Wavefront OBJ files. CloudCompare's (http://cloudcompare.org/) ICP implementation was employed for rough alignment, which also included the estimation of an isotropic scaling factor. Facial areas of interest (FAOI) were extracted from both the smartphone-based SMs and Vectra-based SMs, which entailed cutting the SMs at the visible face edges. These extracted FAOIs were used for alignment, ensuring that non-facial regions and noise were excluded from the alignment process. This approach minimized the potential for errors arising from non-facial regions. Subsequently, the superimposed FAOIs from the smartphone-based approaches were imported into the VAM. The VAM was utilized for precise alignment, which entailed aligning the smartphone-based FAOIs with the SMs generated by the Vectra M5. Figure juxtaposes an SM generated by the Vectra M5 with the smartphone-based SMs. Figure provides an example of the superimposed SMs used for analysis. A flowchart summarizing the study’s methodology is presented in Fig. .

SM-comparison

For SM comparison, the software VAM was utilized. For each participant, the study compared the TrueDepth-camera-based “3D-Scanner App” SMs with Vectra M5-based SMs and the photogrammetry-based “3D-Scanner App” SMs with Vectra M5-based SMs. FAOIs derived from smartphone-based SMs were compared with Vectra-based SMs using landmark-to-landmark distance analyses and volumetric analyses. Landmark-to-landmark distance analyses involved assessing the surface-to-surface deviation by measuring 15 distinct landmark-to-landmark distances between the superimposed SMs. The landmarks’ midpoints were selected manually using the VAM.
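Two elementary point-set computations underlie the processing and comparison steps above: estimating an isotropic scale factor between corresponding point clouds, and measuring a landmark-to-landmark distance between superimposed models. A minimal Python sketch with hypothetical coordinates follows; it is not the CloudCompare or VAM implementation, and the RMS-ratio scale estimate is exact only for noise-free corresponding point sets (ICP refines the alignment iteratively):

```python
import math

def centroid(points):
    """Mean point of a 3D point set."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

def rms_spread(points):
    """Root-mean-square distance of the points from their centroid."""
    c = centroid(points)
    return math.sqrt(sum(math.dist(p, c) ** 2 for p in points) / len(points))

def isotropic_scale(source, target):
    """Isotropic scale factor mapping source onto target, assuming point-wise
    correspondence (ratio of the RMS spreads about the two centroids)."""
    return rms_spread(target) / rms_spread(source)

def landmark_distance(p, q):
    """Euclidean landmark-to-landmark distance between superimposed models (mm)."""
    return math.dist(p, q)

# Hypothetical landmark coordinates (mm): the smartphone-based model is a
# uniformly shrunken copy of the Vectra-based model (factor 0.9)
vectra = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 20.0, 0.0)]
phone = [(0.0, 0.0, 0.0), (9.0, 0.0, 0.0), (0.0, 18.0, 0.0)]
scale = isotropic_scale(phone, vectra)  # factor to enlarge phone toward Vectra
```

After scaling and superimposition, the per-landmark deviations reported in the Results reduce to `landmark_distance` evaluated over the 15 landmark pairs per participant.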
Subsequently, the linear distances between the superimposed models were calculated. Volumetric analyses were conducted by generating a difference model between the superimposed SMs. This process entailed analyzing the differences in volume between the upper face, mid-face, and lower face. The upper face was defined as the area from the upper hairline to an axial plane through the nasion. The mid-face was defined as the area between an axial plane through the nasion and the subnasale. The lower face was defined as the area between an axial plane through the subnasale and the anatomical boundaries of the lower jaw. Areas were selected manually using the VAM. Subsequently, the volumetric differences between the superimposed models were calculated for each region. To assess inter-observer reliability, a second observer independently scaled and aligned all SMs, selected all landmarks manually, and performed all volumetric and landmark-to-landmark measurements. Figure provides an example of a landmark-to-landmark distance measurement. Figure shows volumetric measurements of the upper, mid-, and lower face. Table provides an overview of all measurements.

Statistical analysis

IBM SPSS 29 (SPSS Inc., USA) was used for statistical analysis. A Shapiro–Wilk test indicated that normality could not be assumed for measurements (1) to (20). When assessing the accuracy of TrueDepth- and photogrammetry-based SMs in comparison to Vectra M5-based SMs, values were considered clinically acceptable if mean values did not surpass 2 mm for landmark-to-landmark distances and 2 cc for volumetric differences. This threshold was chosen in accordance with the criteria outlined by Aung et al., who characterized measurements deviating by more than 2 units from reference data as unreliable . A Wilcoxon signed-rank test for paired samples was employed to compare the central tendencies between the methods.
The consistency between surface-to-surface and volumetric deviation was evaluated using Bland–Altman analyses. A 95% limit of agreement (LoA) of ≤ 2 mm was defined as clinically acceptable for landmark-to-landmark distances when comparing TrueDepth- and photogrammetry-based SMs based on their alignment with the Vectra M5. For volumetric deviations, a 95% Bland–Altman LoA of ≤ 2 cc was defined as clinically acceptable for the same comparison. Inter-observer reliability was evaluated using the Intraclass Correlation Coefficient (ICC), the Wilcoxon signed-rank test for paired samples, and Bland–Altman analyses. The ICC was evaluated according to Cicchetti et al. using the following guidelines for interpretation: less than 0.40 – poor, between 0.40 and 0.59 – fair, between 0.60 and 0.74 – good, and between 0.75 and 1.00 – excellent . A 95% Bland–Altman LoA of ≤ 2 cc was considered clinically acceptable for evaluating inter-observer reliability.
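The two agreement statistics described above — Bland–Altman limits of agreement and a two-way random-effects ICC — can be sketched with the Python standard library. This is a minimal illustration with hypothetical observer data; the study itself used IBM SPSS:

```python
import statistics

def bland_altman_loa(a, b, z=1.96):
    """95% Bland–Altman limits of agreement for paired measurements a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD, as in the usual formulation
    return bias - z * sd, bias + z * sd  # (lower LoA, upper LoA)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    ratings: one row per subject, one column per observer."""
    n, k = len(ratings), len(ratings[0])  # subjects, observers
    grand = statistics.mean(x for row in ratings for x in row)
    row_means = [statistics.mean(row) for row in ratings]
    col_means = [statistics.mean(col) for col in zip(*ratings)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)  # between-subject
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)  # between-observer
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical landmark deviations (mm) measured by two observers
obs1 = [0.8, 1.1, 0.6, 1.4, 0.9]
obs2 = [0.9, 1.0, 0.7, 1.3, 1.0]
low, high = bland_altman_loa(obs1, obs2)
icc = icc_2_1(list(zip(obs1, obs2)))
```

For the hypothetical data above, the ICC lands in the "excellent" band of the Cicchetti guidelines, and the LoA interval is checked against the clinical threshold exactly as done for the measurements in the Results.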
Patient demographics

The cohort included 15 men and 15 women. Their mean age was M = 24 years (SD = ± 2.3), mean height M = 176 cm (SD = ± 8 cm), mean weight M = 69.6 kg (SD = ± 14.0 kg), and mean BMI M = 22.5 (SD = ± 3.6).

Landmark-to-Landmark Distance Analyses

Comparison of Vectra M5- and smartphone-based SMs

Table presents the outcomes of the landmark-to-landmark distance analyses. The mean value for all landmark-to-landmark distances (16) of photogrammetry-based SMs to Vectra-based SMs was calculated at M = 0.8 mm (SD = ± 0.58 mm, n = 450; Table ). The highest deviation was found in measurement (14) (left cheilion to left cheilion) with M = 1.32 mm (SD = ± 1.02 mm, n = 30; Table ). The mean value for all landmark-to-landmark distances (16) between TrueDepth-based SMs and Vectra-based SMs was calculated at M = 1.1 mm (SD = ± 0.72 mm, n = 450; Table ). The highest deviation was found in measurement (14) (left cheilion to left cheilion) with M = 1.5 mm (SD = ± 0.95 mm, n = 30; Table ). All landmark-to-landmark measurements (1) – (16) remained within a clinically acceptable range, exhibiting an overall landmark-to-landmark deviation of ≤ 2 mm, when comparing both TrueDepth- and photogrammetry-based SMs with Vectra-based SMs (Table ).

Comparison of TrueDepth- and photogrammetry-based SMs

Table presents the outcomes of landmark-to-landmark distance analyses when comparing TrueDepth- with photogrammetry-based SMs based on their alignment with Vectra-based SMs.
Seven out of 16 measurements exceeded the clinically acceptable 95% Bland–Altman LoA of ≤ 2 mm. However, when contrasting the mean landmark-to-landmark deviation across all distances (16) of TrueDepth- and photogrammetry-based SMs, based on their alignment with Vectra-based SMs, the results indicate a clinically acceptable 95% Bland–Altman LoA of 1.35 mm to −2.0 mm (Table ). The Wilcoxon signed-rank test for paired samples indicated that the deviation across all landmark-to-landmark distances (16) of photogrammetry-based measurements (median = 0.66 mm) was significantly lower than for TrueDepth-based measurements (median = 0.98 mm; Wilcoxon signed-rank test for paired samples; p < 0.001, n = 450; Table ). Figure shows the Bland–Altman plots for the landmark-to-landmark measurements (1) – (16).

Volumetric analyses

Comparison of Vectra M5- and smartphone-based SMs

Table presents the outcomes of the volumetric deviation analyses. The mean volumetric difference across all volumetric measurements (20) comparing photogrammetry-based SMs to Vectra-based SMs was calculated at M = 1.8 cc (SD = ± 2.12 cc, n = 90). The highest deviation occurred in measurement (18) (mid-face) with M = 2.16 cc (SD = ± 2.34 cc, n = 30; Table ). All photogrammetry-based volumetric differences except measurement (18) (mid-face) remained within a clinically acceptable range, exhibiting a volumetric difference of ≤ 2 cc, when comparing photogrammetry-based SMs with Vectra-based SMs. The mean volumetric difference across all volumetric measurements (20) for TrueDepth-based SMs compared to Vectra-based SMs was calculated at M = 3.1 cc (SD = ± 2.64 cc, n = 90). The highest deviation was observed in measurement (18) (mid-face) with M = 4.7 cc (SD = ± 2.86 cc, n = 30; Table ).
TrueDepth-based volumetric differences exceeded the clinically acceptable range for the overall accuracy (20), the upper face (17), and the mid-face (18), exhibiting an average volumetric deviation of > 2 cc, when comparing TrueDepth-based SMs with Vectra-based SMs. However, values for the lower face (19) remained within the clinically acceptable volumetric difference of ≤ 2 cc, when comparing TrueDepth-based SMs with Vectra-based SMs (Table ).

Comparison of TrueDepth- and photogrammetry-based SMs

Table presents the outcomes of volumetric deviation analyses when comparing TrueDepth- with photogrammetry-based SMs based on their alignment with Vectra-based SMs. All volumetric measurements exceeded the ≤ 2 cc 95% Bland–Altman LoA, with the highest deviation identified in the mid-face, ranging from 4.73 cc to −9.81 cc (Table ). The Wilcoxon signed-rank test for paired samples revealed a significant difference in volumetric distances in the upper face and mid-face between the two approaches (Table ). When contrasting the volumetric differences across all regions (20) of TrueDepth- and photogrammetry-based SMs based on their alignment with Vectra-based SMs, the results indicated a clinically unacceptable 95% Bland–Altman LoA of 4.9 cc to −7.6 cc (> 2 cc) (Table ). The Wilcoxon signed-rank test for paired samples indicated that the deviation across all volumetric distances (20) of photogrammetry-based measurements (median = 1.14 cc) was significantly lower than for TrueDepth-based measurements (median = 2.12 cc) (Wilcoxon signed-rank test for paired samples; p < 0.001, n = 90; Table ). Figure shows the Bland–Altman plots for the volumetric distances (17) – (20).

Inter-Observer Reliability

Photogrammetry-based measurements

Table presents the inter-observer reliability of photogrammetry-based measurements. All photogrammetry-based landmark-to-landmark measurements demonstrated good to excellent correlation, with ICC values ranging from 0.70 to 0.97.
Landmark-to-landmark measurements showed a clinically acceptable 95% Bland–Altman LoA of ≤ 2 mm. The Wilcoxon signed-rank test revealed no statistically significant differences between the two observers for measurements (1) – (16) (Table ). Volumetric assessments conducted by the two observers exhibited excellent correlation, with ICC values ranging from 0.96 to 0.97. All photogrammetry-based volumetric measurements, except for measurement (18) (mid-face), displayed a 95% Bland–Altman LoA of ≤ 2 cc. However, the Wilcoxon signed-rank test for paired samples indicated that the deviation across all volumetric distances (20) differed significantly between the two observers (Wilcoxon signed-rank test for paired samples; p = 0.007, n = 90; Table ). Figure presents the Bland–Altman plots illustrating the inter-observer reliability of photogrammetry-based measurements.

TrueDepth-based measurements

Table presents the inter-observer reliability of TrueDepth-based measurements. The majority of landmark-to-landmark measurements ((1) – (8) and (10) – (16)) demonstrated good to excellent correlation, with ICC values ranging from 0.64 to 0.97. Measurement (9) showed fair correlation between the two observers. All landmark-to-landmark measurements displayed a clinically acceptable 95% Bland–Altman LoA of ≤ 2 mm. The Wilcoxon signed-rank test revealed no statistically significant differences between the two observers for measurements (1) – (15). However, the Wilcoxon signed-rank test indicated a statistically significant difference for the deviation across all landmark-to-landmark distances (16) (Wilcoxon signed-rank test for paired samples; p < 0.001, n = 90; Table ). For volumetric assessments conducted by the two observers, excellent correlation was observed for measurements (17) (upper face), (18) (mid-face), and (20) (overall volume). The Wilcoxon signed-rank test revealed no statistically significant differences between the two observers for all volumetric measurements (Table ).
However, all TrueDepth-based volumetric measurements exceeded the clinically acceptable 95% Bland–Altman LoA of ≤ 2 cc between the two observers (Table ). Figure displays the Bland–Altman plots for inter-observer reliability of TrueDepth-based measurements.
However, when contrasting the mean landmark-to-landmark deviation across all distances (16) of TrueDepth- and photogrammetry-based SMs, based on their alignment with Vectra-based SMs, the results indicate a clinically acceptable 95% Bland–Altman LoA of 1.35 mm to −2.0 mm (Table ). The Wilcoxon signed-rank test for paired samples indicated that the deviation across all landmark-to-landmark distances (16) of photogrammetry-based measurements (median = 0.66 mm) was significantly lower than for TrueDepth-based measurements (median = 0.98 mm; Wilcoxon signed-rank test for paired samples; p = < 0.001, n = 450; Table ). Figure shows the Bland–Altman plots for the landmark-to-landmark measurements (1) – (16). Table presents the outcomes of the landmark-to-landmark distance analyses. The mean value for all landmark-to-landmark distances (16) of photogrammetry-based SMs to Vectra-based SMs was calculated at M = 0.8 mm (SD = ± 0.58 mm, n = 450; Table ). The highest deviation was found in measurement (14) (left cheilion to left cheilion) M = 1.32 mm (SD = ± 1.02 mm, n = 30; Table ). The mean value for all landmark-to-landmark distances (16) between TrueDepth-based SMs and Vectra-based SMs was calculated at M = 1.1 mm (SD = ± 0.72 mm, n = 450; Table ). The highest deviation was found in measurement (14) (left cheilion to left cheilion) M = 1.5 mm (SD = ± 0.95 mm, n = 30; Table ). All landmark-to-landmark measurements (1) – (16) remained within a clinically acceptable range, exhibiting an overall landmark-to-landmark deviation of ≤ 2 mm, when comparing both TrueDepth- and photogrammetry-based SMs with Vectra-based SMs (Table ). Table presents the outcomes of landmark-to-landmark distance analyses, when comparing TrueDepth- with photogrammetry-based SMs based on their alignment with Vectra-based SMs. Seven out of 16 measurements exceeded the clinically acceptable 95% Bland–Altman LoA of ≤ 2 mm. 
However, when contrasting the mean landmark-to-landmark deviation across all distances (16) of TrueDepth- and photogrammetry-based SMs, based on their alignment with Vectra-based SMs, the results indicate a clinically acceptable 95% Bland–Altman LoA of 1.35 mm to −2.0 mm (Table ). The Wilcoxon signed-rank test for paired samples indicated that the deviation across all landmark-to-landmark distances (16) of photogrammetry-based measurements (median = 0.66 mm) was significantly lower than for TrueDepth-based measurements (median = 0.98 mm; Wilcoxon signed-rank test for paired samples; p = < 0.001, n = 450; Table ). Figure shows the Bland–Altman plots for the landmark-to-landmark measurements (1) – (16). Comparison of vectra M5- and smartphone-based SMs Table presents the outcomes of the volumetric deviation analyses. The mean volumetric difference across all volumetric measurements (20) comparing photogrammetry-based SMs to Vectra-based SMs was calculated at M = 1.8 cc (SD = ± 2.12 cc, n = 90). The highest deviation occurred in measurement (18) (mid-face) with M = 2.16 cc (SD = ± 2.34 cc, n = 30; Table ). All photogrammetry-based volumetric differences except measurement (18) (midface) remained within a clinically acceptable range, exhibiting a volumetric difference of ≤ 2 cc, when comparing photogrammetry-based SMs with Vectra-based SMs. The mean volumetric difference across all volumetric measurements (20) for TrueDepth-based SMs compared to Vectra-based SMs was calculated at M = 3.1 cc (SD = ± 2.64 cc, n = 90). The highest deviation was observed in measurement (18) (mid-face) with M = 4.7 cc (SD = ± 2.86 cc, n = 30; Table ). TrueDepth-based volumetric differences exceeded the clinically acceptable range for the overall accuracy (20), the upper- (17) and mid-face (18), exhibiting an average volumetric deviation of > 2 cc, when comparing TrueDepth-based SMs with Vectra-based SMs. 
However, values for the lower face (19) remained within the clinically acceptable volumetric difference of ≤ 2 cc when comparing TrueDepth-based SMs with Vectra-based SMs (Table ). Comparison of TrueDepth- and Photogrammetry-based SMs Table presents the outcomes of the volumetric deviation analyses when comparing TrueDepth- with photogrammetry-based SMs, based on their alignment with Vectra-based SMs. All volumetric measurements exceeded the ≤ 2 cc 95% Bland–Altman LoA, with the highest deviation identified in the mid-face, ranging from 4.73 cc to −9.81 cc (Table ). The Wilcoxon signed-rank test for paired samples revealed a significant difference in volumetric distances in the upper face and mid-face between the two approaches (Table ). When contrasting the volumetric differences across all regions (20) of TrueDepth- and photogrammetry-based SMs based on their alignment with Vectra-based SMs, the results indicated a clinically unacceptable 95% Bland–Altman LoA of 4.9 cc to −7.6 cc (> 2 cc) (Table ). The Wilcoxon signed-rank test for paired samples further indicated that the deviation across all volumetric distances (20) of photogrammetry-based measurements (median = 1.14 cc) was significantly lower than that of TrueDepth-based measurements (median = 2.12 cc) (p < 0.001, n = 90; Table ). Figure shows the Bland–Altman plots for the volumetric distances (17) – (20). Photogrammetry-based measurements Table presents the inter-observer reliability of photogrammetry-based measurements. All photogrammetry-based landmark-to-landmark measurements demonstrated good to excellent correlation, with ICC values ranging from 0.70 to 0.97. Landmark-to-landmark measurements showed a clinically acceptable 95% Bland–Altman LoA of ≤ 2 mm. The Wilcoxon signed-rank test revealed no statistically significant differences between the two observers for measurements (1) – (16) (Table ). Volumetric assessments conducted by the two observers exhibited excellent correlation, with ICC values ranging from 0.96 to 0.97. All photogrammetry-based volumetric measurements, except for measurement (18) (mid-face), displayed a 95% Bland–Altman LoA of ≤ 2 cc. However, the Wilcoxon signed-rank test for paired samples indicated that the deviation across all volumetric distances (20) differed significantly between the two observers (p = 0.007, n = 90; Table ). Figure presents the Bland–Altman plots illustrating the inter-observer reliability of photogrammetry-based measurements. TrueDepth-based measurements Table presents the inter-observer reliability of TrueDepth-based measurements. The majority of landmark-to-landmark measurements ((1) – (8) and (10) – (16)) demonstrated good to excellent correlation, with ICC values ranging from 0.64 to 0.97. Measurement (9) showed fair correlation between the two observers. All landmark-to-landmark measurements displayed a clinically acceptable 95% Bland–Altman LoA of ≤ 2 mm.
The Wilcoxon signed-rank test revealed no statistically significant differences between the two observers for measurements (1) – (15). However, the Wilcoxon signed-rank test indicated a statistically significant difference for the deviation across all landmark-to-landmark distances (16) (p < 0.001, n = 90; Table ). For volumetric assessments conducted by the two observers, excellent correlation was observed for measurements (17) (upper face), (18) (mid-face), and (20) (overall volume). The Wilcoxon signed-rank test revealed no statistically significant differences between the two observers for any of the volumetric measurements (Table ). However, all TrueDepth-based volumetric measurements exceeded the clinically acceptable 95% Bland–Altman LoA of ≤ 2 cc between the two observers (Table ). Figure displays the Bland–Altman plots for inter-observer reliability of TrueDepth-based measurements.
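As an illustration of the agreement statistic used throughout these comparisons, the 95% Bland–Altman limits of agreement (LoA) are computed from the paired differences: the bias is the mean difference, and the limits are the bias ± 1.96 times the standard deviation of the differences. The short Python sketch below (using invented example volumes, not study data) shows the computation and the ≤ 2 cc clinical-acceptability check:

```python
# Illustrative sketch of a 95% Bland-Altman limits-of-agreement (LoA)
# calculation of the kind reported above. The paired volumes are invented
# example values, not study data.
import statistics

vectra = [28.1, 31.4, 29.9, 33.2, 30.5, 27.8, 32.0, 29.4]      # reference volumes (cc)
smartphone = [29.0, 32.9, 30.4, 34.8, 31.9, 28.1, 33.6, 30.7]  # paired smartphone volumes (cc)

diffs = [s - v for s, v in zip(smartphone, vectra)]
bias = statistics.mean(diffs)               # systematic (mean) difference
sd = statistics.stdev(diffs)                # sample SD of the differences (n - 1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f} cc")
print(f"95% LoA = [{loa[0]:.2f}, {loa[1]:.2f}] cc")

# Clinical criterion used in the study: LoA within +/-2 cc (or +/-2 mm for
# landmark distances). In this invented example the upper limit (~2.14 cc)
# slightly exceeds the criterion.
print("clinically acceptable:", abs(loa[0]) <= 2 and abs(loa[1]) <= 2)
```

A fuller analysis along the lines reported above would additionally run the paired Wilcoxon signed-rank tests and ICC estimates, for example with `scipy.stats.wilcoxon` and an ICC routine such as `pingouin.intraclass_corr`.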
The present study found an overall landmark-to-landmark deviation of M = 0.8 mm (SD = ± 0.58 mm, n = 450) for photogrammetry-based and M = 1.1 mm (SD = ± 0.72 mm, n = 450) for TrueDepth-based SMs (Table ). Both approaches remained within a clinically acceptable range, exhibiting an overall landmark-to-landmark deviation of ≤ 2 mm. Previous studies align with these findings, reporting surface-to-surface or landmark-to-landmark deviations of ≤ 2 mm.
The mean RMS surface-to-surface deviation for comparable systems (iPhone 11 Pro and 3dMD system) was reported at 0.86 mm ± 0.31 mm by Andrews et al. Their results indicated that midline points near the mouth and lips demonstrated less accurate results. Nightingale et al. reported an accuracy of 1.3 mm ± 0.3 mm between Artec-based and iPhone 8-based SMs. Seifert et al. observed mean landmark-to-landmark deviations of 1.27 mm for the application Polycam, 1.26 mm for ScandyPro, and 1.45 mm for EM3D when comparing SMs obtained from an iPhone 14 Pro to the 3dMD system. They observed that the largest deviations occurred at the stomion for all applications, with values ranging from 1.65 mm for ScandyPro to 2.02 mm for EM3D. They concluded that capturing landmarks in highly flexible or variable facial regions, such as the orolabial region, poses greater challenges for smartphone-based 3D surface imaging. These findings align with the present study’s observations regarding the disparity observed at the left cheilion. Andrews et al. additionally noted that 97% of the distances between landmarks exhibited an average deviation of less than 2 mm. The current trial’s results confirm these findings when comparing the landmark-to-landmark distances of both smartphone-based approaches to Vectra-based SMs. In contrast, Thurzo et al. observed that certain facial regions exhibited an accuracy of less than 3 mm when assessing the accuracy of the Bellus3D Dental Pro app, which utilizes the TrueDepth camera for facial 3D surface imaging. In particular, the authors identified lower accuracy in deeper structures, specifically in the orbital region, consistent with the trend in volumetric differences observed in the present study. The mid-face, encompassing the orbital region, exhibited the highest volumetric deviation between the Vectra M5- and smartphone-based approaches.
In this trial, the overall volumetric accuracy comparing photogrammetry-based SMs to Vectra-based SMs was calculated at M = 1.8 cc (SD = ± 2.12 cc, n = 90; Table ) and at M = 3.1 cc (SD = ± 2.64 cc, n = 90; Table ) for TrueDepth-based SMs compared to Vectra-based SMs. The overall accuracy reported in this study aligns with a study conducted by Farook et al., who found a volumetric discrepancy of 4.23 cc ± 2.28 cc when comparing SMs of an ear cast obtained by the OnePlus 5T (BBK Electronics, China), the iPhone 6s (Apple Inc., USA), and the laser scanner 3D Scanner Ultra HD (NextEngine, USA). However, it is known that the accuracy of anthropometric measurements may vary between smartphone applications and that the precision of SMs is influenced by the scanned object's color and shape. When conducting facial assessments, volumetric results may additionally be influenced by factors such as the inherent difficulty for participants to consistently maintain a neutral facial expression during 3D surface imaging. In addition, volumetric differences in smartphone-based 3D imaging depend on the overall measured volume, with previous investigations indicating an overall measurement error ranging from 0.67% to 3.19%. Further research may contribute to a broader comprehension of smartphones' capability to anticipate volumetric alterations in the facial region. A constraint of this study pertains to the manual extraction of FAOI, a method that may potentially limit the accuracy of the approach. However, this step is essential for aligning the smartphone-based SMs with Vectra M5-based SMs, and the procedure was performed consistently with previous investigations. Introducing automation in extracting FAOI could potentially address some of these limitations. General limitations of 3D surface imaging must also be considered, particularly when incorporating this technology into clinical routines.
A critical aspect is the standardization of lighting conditions, as they significantly impact the accuracy of smartphone-based 3D surface imaging; light reflections can affect landmark detection on the SMs. To address this, the present study was conducted in a room with ambient lighting specifically designed for imaging patients with craniofacial deformities and orthognathic surgery needs. This controlled environment helped mitigate lighting-induced artifacts, a recommendation also supported by previous studies advocating the use of standardized lighting. Another limitation is patients' inherent difficulty in maintaining a neutral facial expression during 3D surface imaging. Previous studies have shown that subtle involuntary contractions of the facial muscles can affect the accuracy of facial landmark detection and volumetric data. Therefore, participants were instructed to maintain a neutral facial expression throughout the imaging process. Future research could investigate stabilization methods or incorporate real-time feedback systems to assist participants in maintaining a consistent facial posture during surface imaging. Additionally, it is worth examining the authors' method of evaluating smartphone-based approaches in relation to their alignment with the gold standard. Aung et al. proposed that deviations exceeding 2 mm from the reference data are clinically unreliable when comparing anthropometric measurements obtained from SMs generated by an optical surface scanner developed by the Medical Physics Department at University College Hospital (London, UK) with direct anthropometric measurements. Therefore, this study defined measurements within ≤ 2 units of the reference as clinically acceptable. When applying smartphone-based surface imaging in OMFS, it is important to consider scenarios in which deviations above 2 mm or 2 cc are not tolerable.
These include, among others, intraoperative instrument navigation and precise preoperative planning of orthognathic surgeries, where high accuracy is required for accurate segmentation and repositioning of maxillary or mandibular segments to avoid misalignment. It is important to acknowledge that smartphone-based methods can only be implemented in clinical workflows if certified as medical devices. The software used in this study was utilized in an experimental context and is not yet eligible for routine clinical application. While smartphone-based 3D surface imaging holds significant potential, addressing variability between smartphone models and software versions is essential to ensure the reliability and generalizability of this study's results. Several studies have reported varying levels of accuracy across different devices and software. Clinicians are advised to critically evaluate the accuracy of smartphone-based surface imaging software and devices before integrating them into routine clinical workflows. Future studies should focus on standardization and cross-platform validation to enhance clinical applicability. In addition, the findings of this study regarding inter-observer reliability warrant further discussion. Photogrammetry-based measurements revealed significant differences in volumetric assessments, while TrueDepth-based measurements exceeded the 95% Bland–Altman LoA for inter-observer reliability. These discrepancies may be attributed to the study's methodology, which required a second observer to manually relocate all landmarks and to scale and align all SMs, making it challenging to reproduce consistent volumetric measurements. This observation underscores the need for further software development toward a fully automated smartphone-based approach, which could enhance reproducibility and ease of use in OMFS.
While smartphone-based 3D surface imaging may not yet fully rival the capabilities of sophisticated 3D surface imaging systems, it can function as a supplementary tool for clinicians, facilitating communication with both patients and fellow healthcare professionals. As technology continues to advance, smartphones may emerge as powerful tools for both patients and surgeons. This study comprehensively examined two smartphone-based methods for facial 3D surface imaging against the current gold standard. Smartphone-based approaches using both the TrueDepth camera and photogrammetry exhibited overall landmark-to-landmark distances of ≤ 2 mm, indicating clinically acceptable results in capturing facial features compared to the Vectra M5. Photogrammetry-based SMs generated by smartphones showed higher inter-observer reliability for overall landmark-to-landmark deviation and demonstrated superior alignment and higher volumetric accuracy with respect to Vectra-based SMs than SMs generated by the TrueDepth camera. Smartphone-based facial 3D surface imaging emerges as a potent tool for clinicians, with oral and maxillofacial surgeons well positioned to lead its adoption.
Co-Designing Communication: A Design Thinking Approach Applied to Radon Health Communication | 1867b130-2254-4728-abef-226623c5e5e1 | 10048842 | Health Communication[mh] | Health intervention planning models emphasize the importance of participatory methods, thus involving community members and other relevant stakeholders in the different planning stages, from problem definition to intervention implementation . Not only does this increase the external validity of the intervention by the acceptance and acknowledgment of the input provided by the community, but it also provides broad perspectives and skills from community members, stakeholders, and the design team. Using the collective creativity of professionals and the local community in designing an intervention is referred to as co-design and can be seen as a citizen science approach . Although multiple citizen science projects were conducted within the field of radon, co-design methods have, to our knowledge, not yet been adopted in intervention design . Radon is an indoor air pollutant. It is a natural radioactive gas that is present in the soil in varying concentrations depending on the composition of the ground. Radon is invisible and has no scent, there are no visible casualties due to the gas, and since it is a natural gas, there is no culprit to blame . In high-risk areas, radon can enter houses through cracks or different installation tubes in the foundations of buildings, and the gas can accumulate indoors. Radon concentrations are one of the leading causes of lung cancer . Despite current health interventions, research shows that testing and mitigation rates remain insufficient . This raises the question of whether the current interventions tackle the right barriers and provide the right facilitators. Research specifically focused on (mass) communication interventions regarding radon has observed multiple gaps in the communication strategies adopted in the past. 
For instance, statistical information in leaflets or news articles prevails. To address these gaps, an exploratory co-design study was developed, focusing first on general barriers and facilitators to performing radon-protective behaviors and second on the ideation and design of communication interventions, together with people with personal experience with radon. In this way, community members co-design a communication intervention, making it more personally relevant and likely more effective. 2.1. Health Interventions to Address Radon Exposure Changing behavior requires change on different levels; the behavior change wheel identifies capability, opportunity, and motivation as the main sources of behavior. Motivation reflects the individual, opportunity reflects the individual's environment, and capability reflects a combination of the two. For behavior change to be effective and durable, all three components should be addressed with different types of interventions, which often stem from the policy level. Looking at the policy level regarding radon, Europe adopted the Basic Safety Standards in 2013 and included radon protection as well. In practice, all European Member States are legally required to develop and implement a radon action plan containing information on ways to decrease radon levels at homes and workplaces. In the United States, the Indoor Radon Abatement Act (IRAA) of 1988 requires that indoor radon levels be as low as outdoors. These pieces of legislation, however, operate at the highest level (namely the European level and the national level of the United States); the responsibility lies with the countries/states and their interpretation of their responsibility and legislation. Some countries/states, for instance, Estonia, only inform people about radon and place the responsibility for behavioral actions on the individual, whereas other countries, for instance, Ireland and Belgium, take initial steps to include more specific legislation.
Multiple scholars state that legislative procedures in terms of housing code requirements (comparable to energy efficiency) might increase the uptake of radon testing and mitigation, as is the case in certain states in the United States, for instance, Pennsylvania. At the European level, Austria is considering similar measures. Other policy measures are mostly concerned with reducing the economic impact of the testing and mitigation procedure, for instance, incentivizing mitigation, offering subventions, or providing free tests. A city in Ireland experimented with providing digital radon monitors in the library, meeting the need for these monitors without the cost of buying them. Other countries, such as Bulgaria and the Czech Republic, provide free tests, and yet other countries (e.g., Belgium) sell tests at reduced prices during the heating season. Subventions for mitigation are also country-dependent; for instance, Austria, Germany, and Sweden provide financial support to those carrying out mitigation works. There is little real evidence on whether the financial aspect matters to people. Interestingly, focus groups in Ireland show that people who performed mitigation perceived the costs as not too high, as it was an investment in their health, while people who did not mitigate (but had high levels of radon) perceived the costs as too high and an important barrier. Despite the interventions and measures in place, the uptake of radon-protective behavior remains insufficient. It remains unclear whether the interventions in place address the barriers people experience and whether they create the right facilitating conditions. Therefore, there is a need to explore in more depth what barriers and facilitators people experience regarding radon-protective behavior.
As radon is a multi-level problem, not only do situational and environmental factors matter; the responsibility for actually performing testing and mitigation often still lies with individual homeowners. So, while creating the right environment for them to act is needed, they still must be motivated to act. One way to increase motivation is through communication and persuasion. Communication occurs on different levels, including interpersonal communication (e.g., an individual talking about radon with their general practitioner), stakeholder communication (e.g., general practitioners informed about radon at a higher level), and mass media communication (e.g., press articles about radon). A recent systematic review focusing on mass media communication about radon shows that campaigns mostly aim to increase awareness, knowledge, risk perception, and perceived susceptibility using factual communication in the form of brochures or press articles. The focus is on providing people with information about the characteristics of radon and the (technical) solutions. Although informative leaflets can be effective, they assume the full rationality of the audience, expecting people to act upon the information they receive. The literature on behavior change has shown that people often exhibit bounded rationality and that other aspects, such as relevance, biases, and emotions, play an important part in the process. Other messages, such as fear appeals in videos, increased the intention to request more information, and direct phone calls and letters increased the intention to test. Moreover, while these communication interventions have been shown to be effective to some degree (e.g., a small increase in testing behavior), the next step, namely mitigation, remains mainly unchanged, which identifies an additional gap. In particular, Hevey identified 17 steps of behavior, from becoming informed about radon to having confirmed mitigation.
However, communication interventions rarely move along these steps. The precaution adoption process model is a theory based on the different stages of behavior, from being unaware of the problem to maintaining the protective behavior. The theory emphasizes that different stages require different communication approaches. For instance, to move from the first stage (unaware) to the second stage (unengaged), media messages about the hazards are needed, while in progressing from the second stage to the third (undecided), testimonials and personal experiences are most effective. Further, to proceed from the third stage to the fourth (decided not to act) or to the fifth (decided to act), information about personal susceptibility, likelihood, and severity of radon exposure is effective. Detailed information about ways to perform the behavior, the costs, and the resources is mainly effective when moving from the fifth stage to the final stage (maintenance). Overall, the systematic review showed a need for more personally relevant communication efforts, as the question remains whether and to what extent the current communication approaches tackle the right determinants at the right moment and are in line with the needs of the public. This unveils the need to inquire about the preferences of the target group regarding radon-related communication. 2.2. Co-Design in Health Interventions on Radon To answer these questions, we need to engage in dialogue with the target group themselves and, even more so, involve them actively in developing communication tools. Participatory designs include various methods; however, the common denominator is the active engagement of the public. Different levels exist within participatory designs, from providing information (one-way) to discussion (two-way) and active participation (multiple ways), which is the highest level of involvement.
The latter often results in participatory decision-making and the co-design of new products, technologies, or health interventions. Within the existing research about the health issues related to radon, participatory designs or citizen science projects have been adopted previously. The main topic investigated in previous studies was understanding the lack of mitigating behavior, either through interviews (i.e., providing information) or through discussing the topic in focus groups (i.e., discussion). Citizen science projects were related to, for instance, raising awareness, radon mapping, or radon testing and mitigation. To our knowledge, ours is the first study applying active participation in the design process of a communication intervention in the context of radon. More specifically, our study was designed to involve residents and homeowners in understanding the lack of radon-protective behaviors and related general barriers and facilitators, and in considering solutions regarding communication campaigns. To investigate these aspects, we opted for design thinking. This participatory design framework allows for opening up the problem and inviting people to think along, identify it, and create solutions based on their first-hand experiences. It is a way of creative problem-solving that is human-centered and emphasizes observation, collaboration, and visualization of ideas. It emphasizes empathizing with the issue and its context, defining the exact problem and challenge, ideating ways to solve the challenge, and testing prototypes to do so. This method, both problem- and solution-oriented, can provide new insights into why people avoid radon-protective behaviors, what they think the solution would be, and even what the solution should look like.
To summarize, two questions are raised: first, what are the main barriers and facilitators to engaging in radon-protective behavior experienced by homeowners, and how are these addressed in current interventions, if at all? Second, how can the communication about radon be improved to be more relevant and engaging for the target group? Changing behavior requires change on different levels; the behavior change wheel identifies capability, opportunity, and motivation as the main sources of behavior. Motivation reflects the individual, opportunity reflects the individual’s environment, and capability reflects a combination of the two. For behavior change to be effective and durable, the three components should be addressed with different types of interventions that often stem from the policy level . Looking at the policy level regarding radon, Europe adapted the Basic Safety Standards in 2013 and included radon protection as well . In practice, all European Member States are legally required to develop and implement a radon action plan containing information on ways to decrease radon levels at homes and workplaces. In the United States, the Indoor Radon Abatement Act (IRAA) from 1988 requires that indoor radon levels be as low as outdoors . These legislations, however, are on the highest level (namely the European level and the National level of the United States). The responsibility lies with the countries/states and their interpretation of their responsibility and legislation. Some countries/states, for instance, Estonia, only inform people about radon and place the responsibility for behavioral actions on the individual , whereas other countries, for instance, Ireland and Belgium, take the initial steps to include more specific legislation . 
Multiple scholars state that legislation procedures in terms of housing code requirements (comparable to energy efficiency) might increase the uptake for radon testing and mitigating , as is the case in certain States in The United States, for instance, Pennsylvania . On a European level, Austria is considering similar measures . Other policy measures are mostly concerned with reducing the economic impact of the testing and mitigating procedure—for instance, incentivizing mitigations, offering subventions, or providing free tests . A city in Ireland experimented with providing digital radon monitors in the library to facilitate the need for these monitors without the costs of buying them . Other countries, such as Bulgaria and the Czech Republic, provide free tests, and yet other countries (e.g., Belgium) sell tests at lowered prices during the heating season. Subventions for mitigation are also country-dependent; for instance, Austria, Germany, and Sweden provide financial support to those carrying out mitigation works . No real evidence is available on whether the financial aspect matters to people. Interestingly, focus groups in Ireland show that people who performed mitigation perceived the costs as not too high as it was an investment in their health. At the same time, people who did not mitigate (but had high levels of radon) perceived the costs as too high and an important barrier . Despite the interventions and measures in place, the uptake of radon protective behavior remains insufficient . It remains unclear whether the interventions in place address the barriers people experience and whether they create the right facilitating conditions. Therefore, there is a need to explore in more depth what barriers and facilitators people experience regarding radon-protective behavior. 
As radon is a multi-level problem, not only do the situational and the environmental factors matter, the responsibility of actually performing testing and mitigating often still lies with the individual homeowners . So, while creating the right environment for them to act is needed, they still must be motivated to act. One way to increase motivation is through communication and persuasion. Communication occurs on different levels, including interpersonal communication (e.g., an individual talking about radon with their general practitioner), stakeholder communication (e.g., general practitioners that are informed about radon on a higher level), and mass media communication (e.g., press articles about radon). A recent systematic review that focused on mass media communication about radon shows that campaigns mostly aim to increase awareness, knowledge, risk perception, and perceived susceptibility using factual communication in the form of brochures or press articles. The focus is on providing people with information about the characteristics of radon and the (technical) solutions. Although informative leaflets can be effective, they assume the full rationality of the audience, where they act upon the information they receive. The literature on behavior change has shown that people often experience bounded rationality and that other aspects, such as relevance, biases, and emotions, play an important part in the process . Other messages such as fear appeals in videos showed increased intention to request more information , and direct phone calls and letters increased intention to test . Moreover, while these communication interventions have shown to be effective to some level (e.g., low degree of increase in testing behavior), the next step, namely mitigation, remains mainly unchanged , which identifies an additional gap. In particular, Hevey identified 17 steps of behavior, from becoming informed about radon to having confirmed mitigation . 
However, communication interventions rarely move along these steps. The precaution adoption process model is a theory based on the different stages of behavior, from being unaware of the problem to maintaining the problem. The theory emphasizes that different stages require different communication approaches. For instance, to move from the first stage (unaware) to the second stage (unengaged), media messages about the hazards are needed, while in progressing from the second stage to the third (undecided), testimonials and personal experiences are most effective. Further, to proceed from the third stage to the fourth (decided not to act) or to the fifth (decided to act), information about personal susceptibility, likelihood, and severity of radon exposure is effective. Detailed information about ways to perform the behavior, the costs, and the resources are mainly effective when moving from the fifth stage to the final stage (maintenance) . Overall, the systematic review showed a need for more personally relevant communication efforts, as the question remains whether and to what extent the current communication approaches tackle the right determinants at the right moment and are in line with the needs of the public . This unveils the need to inquire about the their preferences of the target group regarding radon-related communication. To answer these questions, we need to engage in dialogue with the target group themselves and, even more so, involve them actively in developing communication tools. Participatory designs include various methods; however, the mean denominator is the active engagement of the public. Different levels exist within participatory designs, from providing information (one-way) to a discussion (two-way) and active participation (multiple ways), which is the highest level of involvement. The latter often results in participatory decision-making and co-design of new products, technologies, or health interventions . 
Within the existing research about the health issues related to radon, participatory designs or citizen science projects have been adopted previously . The main topic investigated in previous studies was the understanding of the lack of mitigating behavior, either through interviews (i.e., providing information) or through discussing the topic in focus groups (i.e., discussion) . Citizen science projects were related to, for instance, raising awareness, radon mapping, or radon testing and mitigating . To our knowledge, ours is the first study applying active participation in the design process of a communication intervention in the context of radon. More specifically, our study was designed to involve residents and homeowners in understanding the lack of radon-protective behaviors and the related general barriers and facilitators, and in considering solutions regarding communication campaigns. To investigate these aspects, we opted for design thinking. This participatory design framework allows for opening up the problem and inviting people to think along, both to identify the problem and to create solutions based on their first-hand experiences . It is a way of creative problem-solving that is human-centered and emphasizes observation, collaboration, and visualization of ideas. It emphasizes empathizing with the issue and its context, defining the exact problem and challenge, ideating ways to solve the challenge, and testing prototypes to do so . This method, both problem- and solution-oriented, can provide new insights into why people avoid radon-protective behaviors, what they think the solution would be, and even what the solution should look like. To summarize, two questions are raised: first, what are the main barriers and facilitators to engaging in radon-protective behavior experienced by homeowners, and how are these addressed in current interventions, if at all?
Second, how can the communication about radon be improved to be more relevant and engaging for the target group? To apply the participatory design, we composed a research team comprising researchers from different disciplines, such as risk communication, health communication, sociology, nuclear physics, and citizen science. This helped to avoid conceptual bias. Most researchers on the team had expertise in qualitative methods and radon research; however, none had operational expertise in design thinking as a research method. Therefore, the research protocol was developed in collaboration with a Belgian company specializing in design thinking (ACOMPANY). The company also provided a full training day on the method for all researchers involved in this study. 3.1. Participants The aim was to recruit participants who already had some experience with radon so that they could speak from their own experiences rather than a hypothetical scenario. This meant that we recruited people who had already measured (high) radon levels. 3.2. Workshop Design A workshop was designed that consisted of two unstructured group sessions. Each session lasted two hours, and the sessions were scheduled a week apart. More specifically, the framework of the double diamond was applied to the context of radon and the workshop design itself . The first stage of this framework, as seen in , is the challenge, which is the starting point of the workshops and describes the ideal scenario. For this research project, the challenge was defined as “would it not be nice if all houses were radon-free,” referring to the ideal scenario where radon-protective behavior is performed and facilitated easily among all homeowners in radon-prone areas. In the first session, the participants used this challenge to consider why houses are not already radon-free. In other words, “would it not be nice if all houses were radon-free” was the initial prompt to discuss barriers and facilitators in the first session.
Since the participants all had experience with radon, this prompt was an understandable starting point for them. Participants recorded all the problems (i.e., barriers) that arose on post-it notes while discussing them. These problem statements could relate to the causes of the challenge, the importance, the target audience, and other related issues, specifically in the form of “how-to questions.” This stems from the concept of how to ensure that all houses are radon-free, formulating a barrier as a facilitator; for instance, “how to make people aware” (i.e., facilitator) refers to the lack of awareness (i.e., barrier). Once saturation was reached and no new problems were added, dot-voting allowed for defining the most pressing problem statements. In other words, the first session discovered the why of the main challenge. Between the first and second sessions, the problem was defined further. In this case, the problem definition for the second session was “how to improve radon communication.” In the workshop’s second session, this was used as the prompt to start the discussion, together with the main findings from the first session. In this session, the focus was on ideation and brainstorming. The participants discussed potential radon communication strategies, selected the ones they considered the best, and started to develop protocols for the materials, which led to a communication strategy. This session explored the how of the main challenge. Both sessions aimed to diverge first (i.e., creating options) and converge afterward (i.e., selecting options). One of the tools often used in design thinking approaches is developing a customer journey, which indicates all the steps between becoming aware of a product and purchasing it, or even becoming an ambassador (i.e., a customer actively promoting the product among peers).
Based on the precaution adoption process model and the 17 steps of radon behavior developed by Hevey , a homeowner journey was developed before the workshops. Seven steps were identified: awareness, evaluation of the knowledge (i.e., engagement with the health issue), purchase of a radon test kit, delivery and conducting of the radon test, action (i.e., mitigating the home), reassurance (i.e., confirming successful mitigation by re-testing), and ambassadorship (i.e., convincing others about the importance of radon tests). For every step, barriers, motivations, emotional states, and actions were identified. Developing the homeowner journey ensured a complete overview of the available literature about radon behavior. The full homeowner journey can be found in . If the discussion stalled, the homeowner journey served as an additional prompt during the first sessions. The workshops were conducted in Belgium and Slovenia. 3.3. Workshop 1: Belgium Radon is a significant health problem in Belgium. Approximately 48% of the Walloon region in Belgium is expected to be affected by radon . Radon likely contributes to approximately 480 deaths due to lung cancer per year . To prevent this, approximately 36,000 dwellings need to be mitigated . The Federal Agency for Nuclear Control (FANC) is responsible for organizing activities to apply the regulations, comply with the obligations, and raise awareness among the actors involved with radon. Therefore, FANC strives for close collaboration with multiple actors, such as the provinces, municipalities, professional organizations, academic institutions, and the public. While exposure to radon at work is regulated and the employer is responsible for mitigating the workplace, mitigation of dwellings is not legally required. It remains the responsibility of the homeowner .
To increase the number of radon tests in dwellings, regional authorities contribute to the cost of radon test kits, reducing the price of a test kit from 30 euros to 15 euros. Financial help from the regional government for mitigation actions is also in place. The mitigation of a dwelling in Belgium costs between 500 and 5000 euros. Lists of companies with expertise in radon mitigation are published online . A communication plan was defined in 2014 and is updated yearly, based on the evaluation of the past year, to improve awareness and increase mitigation rates. In this context, a dedicated internet page was established. The effectiveness of the communication interventions is evaluated for the most impactful activities, such as orders of test kits. Other measures, such as reach (e.g., visits to internet pages) and media return, are also evaluated. FANC also tested social advertising in 2021 (paid ads on Twitter). However, this campaign was not further evaluated. The results of a public opinion survey show that 32% of the population are aware of radon and that 11% of them have applied some mitigation measure in their home . The first workshop was conducted in March 2022 in Belgium. Due to COVID-19 restrictions, both sessions occurred online. An online whiteboard was used as an alternative to physical post-its. 3.3.1. Sample Recruitment was conducted through local authorities, who spread the message about the workshops on their social media and websites. The principal investigator also contacted radon mitigation companies, who, in turn, forwarded the message to people who had completed (or were in the process of completing) radon mitigation. This way, people were invited to contact the research team to enroll in the workshops. The sample of the first workshop consisted of six participants, of whom four had detected radon in their homes and two were professionally engaged with radon. Three participants belonged to the same family, all living in Luxembourg.
This was unforeseen and only known at the start of the first session, but due to recruitment challenges, we decided that they could still participate, as their experiences could inform us as well. In every session, five participants were present, with four participants overlapping both sessions. 3.3.2. Facilitation Facilitators of ACOMPANY moderated the workshop in Belgium. This allowed the research team to observe and learn the methods they adopted. During both sessions, the researchers observed without interfering, as the objective was to explore the first-hand barriers and solutions of the participants. This workshop demonstrated some limitations of the online format; therefore, we decided to wait until the end of COVID-19 restrictions to host the second workshop face-to-face. 3.4. Workshop 2: Slovenia Due to its geology, Slovenia has many municipalities heavily affected by radon. It is estimated that 100 people per year die due to lung cancer caused by radon . To prevent radon-related deaths, the Slovenian Radiation Protection Administration is responsible for the Radon Action Plan . Through online and face-to-face meetings, it consults with all ministries involved with radon, including the Ministry of Health and the Ministry of Environment, technical support organizations, and education. Free measurements for dwellings are available for residents in radon-risk areas; however, the number of available tests is limited. The average mitigation cost for a standard dwelling amounts to a few thousand euros. Target groups of communication interventions are employers, employees, local decision-makers, and the general public. Communication interventions are focused on increasing awareness and are mainly developed in the form of brochures. Other strategies include news articles, seminars, expert meetings, workshops, and a comic book for children .
Perko and Turcanu determined that the frequency of personal advice, dialogue, and responses to radon-related questions and concerns of residents is very good in Slovenia compared to other European countries . The effectiveness of the communication interventions is not measured, and objective radon awareness measurements among residents are unavailable. In May 2022, the second workshop took place face-to-face in Slovenia. Recruitment was again conducted through local authorities; however, it was also picked up by local media, such as the local radio and newspaper. 3.4.1. Sample The sample of the second workshop consisted of 9 participants for the first session and 8 participants in the second session. All of them were residents of a high-risk area in Slovenia who had experience with testing their homes and had detected indoor radon concentrations above the reference level of 300 Bq/m³. They all were either planning to mitigate or had already performed mitigation measures. 3.4.2. Facilitation The second workshop was moderated by two researchers of the research team, native Slovenian speakers with experience in moderating qualitative research. The researchers who conducted the second workshop were briefed by those who observed the first one to align the workshop procedures. 3.5. Data Analysis Both workshops were recorded and transcribed according to the ethical guidelines of the social sciences. The research team conducted an inductive thematic analysis, adopting a semantic approach. The participants recorded their main thoughts regarding the barriers, facilitators, and communication approaches on post-it notes. Therefore, their views, opinions, and experiences were made explicit, hence the semantic approach. These post-it notes were used to code the transcripts to provide more background information. After each session, these post-it notes (i.e., codes) were categorized thematically by the research team until a consensus was reached.
Since the approach was to explore the barriers, facilitators, and communication ideas, no pre-defined codebook was used.

4.1. Workshop 1: Belgium (Online) 4.1.1. Session 1: Problem Statements The results of the first session were oriented toward problem formulations related to the following challenge: “would it not be nice if all houses were radon-free?”. In total, 36 problem statements were formulated, identifying the underlying barriers and facilitators. Not all of them were in the “how-to” format. However, they were still valuable in emphasizing certain problem areas.
The following are examples of problem statements: “How to establish an EU standard?”, “How to oblige radon measures in new buildings?”, “How to find help from the state?”, “How to facilitate the necessary steps?”, “How to shock people?”, “How to develop a decision tree?”, etc. The full list of problem statements can be found in . Another example includes problem statements such as “How to make people aware?”, “How to ‘touch’ people?”, “How to visualize the danger?”: “… we realize that people don’t know about radon in our country. I live in the province of Luxembourg [Belgium], which is the most affected. And despite everything we do, people don’t know about it. I think that if we want to be able to act and do something, people must first know.” (P2) “One difficulty is that when we talk about the FANC [Federal Agency for Nuclear Control], we don’t know, it’s something we don’t know too much about, which is, which is not close to here. So, there is a certain distance, both physical and perhaps also in the consciousness of people.” (P3) Other problem statements included issues related to “How to get help to remediate?”, “How to find reliable information?” and “How to find the right solution for the right house?”: “To give you an example, we have a list of companies in Luxembourg [country] that should be able to deal with radon. We contacted them all, the whole list, there is nobody who really has experience on it, but they are on the list of experts.” (P5) After diverging, i.e., collecting different problem statements, and after saturation was reached, the participants converged by choosing the problems that they felt were most important, as presented in . Each participant compiled a top 3 of issues. To provide an overview of the prioritized issues, the researchers attributed 3 points to each participant’s number 1, 2 points to their number 2, and 1 point to their number 3. The statements with the most points were therefore considered the most important.
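The weighted prioritization described above (3 points for a first choice, 2 for a second, 1 for a third) can be sketched as a short script. This is a minimal illustration only; the participant rankings and statement labels below are invented for the example and do not come from the study data.

```python
from collections import defaultdict

def score_rankings(rankings, weights=(3, 2, 1)):
    """Aggregate each participant's ranked top 3 into weighted scores:
    3 points for a first choice, 2 for a second, 1 for a third."""
    scores = defaultdict(int)
    for top3 in rankings:
        for statement, weight in zip(top3, weights):
            scores[statement] += weight
    # Statements with the most points are considered the most important.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical top-3 rankings from three participants (labels invented):
rankings = [
    ["awareness", "cost", "standards"],
    ["cost", "awareness", "reliable info"],
    ["awareness", "standards", "cost"],
]
print(score_rankings(rankings))
# awareness: 3+2+3 = 8, cost: 2+3+1 = 6, standards: 1+2 = 3, reliable info: 1
```

The same aggregation also covers the simple dot-voting used elsewhere in the workshops by setting all weights to 1.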
Problem definition After the first session, the researchers clustered the problem statements thematically to identify the underlying facilitators. The following categories were formulated: installing standardization to ensure quality ( n = 7), clarifying a stepwise approach ( n = 4), communication through different stakeholders ( n = 4), thresholds ( n = 7), cost of mitigation ( n = 2), mitigation contractors ( n = 2), and communication ( n = 10). The full overview can be found in . The study aimed to co-design communication tools, so the problem definition was also related to communication. Because communication was highly represented and comprised some of the prioritized problem statements, this decision was justified. 4.1.2. Session 2: Solution Statements In the second session, the working statement concerned communication. In total, the participants presented 41 ideas. Examples of ideas are workshops in primary schools, including general practitioners in the communication concerning radon, creating a “radon safe” label, a testimonial of someone who easily mitigated, a catchy radio spot with humor, advertising via social media, more visibility for mitigation companies, flyers in public spaces, etc. The full list of communication ideas can be found in . After saturation was reached during the brainstorming, participants converged by voting for their favorite ideas. They each had two votes, and the results are presented in . During this session, the facilitator prompted ideas for four steps of the homeowner journey: radon awareness, evaluation (before testing), action (i.e., mitigation), and ambassadorship. To simplify the process for the participants, the research team decided to map the ideas to the homeowner journey among themselves after the session. Some ideas were mapped to multiple stages. The full overview can be found in .
Most of the ideas were mapped to the first ( n = 20) and the second step ( n = 20), with many overlapping communication strategies such as an advertising campaign via social media, a catchy radio spot with humor, a booklet in schools, press articles, and flyers. In the action step, fewer ideas were presented ( n = 14), and these strategies implied more specific information. Examples include a testimonial of someone who easily mitigated, flyers with information about mitigation costs, showing examples of other people who mitigated, showing pictures that emphasize the simplicity of the process, and providing more visibility for solutions and mitigators. Finally, the last step, ambassadorship, was the one with the fewest ideas ( n = 5); however, those ideas do emphasize the social component of communication strategies, including, for instance, an advertising campaign on social media, a testimonial, creating a “radon safe” label, or organizing a competition with prizes for people who mitigated their houses. Due to the limitations of the online format regarding time management and group dynamics, the second session of the first workshop ended with prioritizing solutions and did not proceed further with designing them. 4.2. Workshop 2: Slovenia (Face-to-Face) 4.2.1. Session 1: Problem Statements Similar to the first workshop in Belgium, the first session in Slovenia was oriented toward problem formulations; however, the highly involved participants had already started formulating solutions at this stage. Despite the different format, the solutions provided in this first session also expose underlying issues. For clarification, we rephrased the solutions from this first session into problem statements; however, the original formulations can still be found in . In total, 45 problem statements/solutions were formulated.
A few examples include: “How to include radon as a topic in schools?”, “How to provide understandable and accessible information about mitigation?”, “How to provide accessible free dosimeters?”, “How to get subventions from the state?” “How to guarantee the quality of the mitigation works?”, etc. The full list can be found in . Another example is “How to increase awareness about radon in the population?”. Multiple participants indicated that they learned about radon through their social networks: “Well, then one of my friends was encouraged [to test], and she also said, I didn’t know either, I didn’t know, and the problem is that we ordinary people don’t even know, unless we are really terribly interested in it, to even report it so that you can measure it.” (P6) “We had a measurement done because a friend of ours had done it a couple of 500 m away, and then we had it done.” (P9) After diverging, and when no new problems were added, the participants converged by voting for the most important problem statements in their opinion. They each cast three votes. The issues with the most votes were the most important barriers. The results of the dot voting can be found in . Problem definition The problem statements were clustered thematically by the researchers, resulting in the following categories: communication, information, and awareness ( n = 10), advice after measurement ( n = 6), comprehensive/holistic approach ( n = 3), accessibility of passive and active dosimeters and measurement support ( n = 9), mitigation support ( n = 5), the financial burden of mitigation ( n = 5), the legal requirement ( n = 6), and motivation ( n = 1). The full overview can be found in . Similar to the Belgian workshop, the communication, information, and awareness category were emphasized. Again, this justified the decision to focus on communication in the second session. More specifically, the following questions were raised: How do you think radon awareness should be raised? 
Moreover, how should advice on mitigation be communicated? 4.2.2. Session 2: Solution Statements For the first question about awareness, 22 ideas were formulated, including advertisements on YouTube, TikTok, and Instagram, regular information about radon in mass media, personal letters to all households, an interactive portal about radon, radon education in schools, contributions about radon in TV, radio, and newspapers. The participants voted for the best ideas, which can be found in . The group then discussed the details of the personal letter (i.e., informing households by post). For instance, the participants discussed that the letter should cover the prevalence of radon, the dangers, locations and ways to order dosimeters, the concerning radon values, and an invitation to participate in the measurements. They discussed that the municipality should draft the letter with an official signature. Further, they discussed the possibility of opening a special office to manage the radon campaign. The group also discussed whom to target and whether it should be addressed or unaddressed mail. They mentioned that a special message could be printed on the envelope, such as “it’s about your health.” The participants agreed that the letter should be sent in the winter. Creating a logo or corporate identity was also discussed, using red and yellow, as these colors are associated with radon areas, and green because it is associated with a solution. In the first part, the logo should be intimidating, and reassuring in the second part, as a solution. The group also discussed that the letter should be distributed by e-mail and social media. For the second question concerning the advice on mitigation, the group formulated 13 ideas. Examples included personal testimonials of people during mitigation, a list of mitigation contractors, social media campaigns, and personal communication with a selected advisor. The full list can be found in . 
4.1. Workshop 1: Belgium (Online)
4.1.1. Session 1: Problem Statements
The results of the first session were oriented toward problem formulations related to the following challenge: "would it not be nice if all houses were radon-free?". In total, 36 problem statements were formulated, identifying the underlying barriers and facilitators. Not all of them were in the "how-to" format. However, they were still valuable in emphasizing certain problem areas. The following are examples of problem statements: "How to establish an EU standard?", "How to oblige radon measures in new buildings?", "How to find help from the state?", "How to facilitate the necessary steps?", "How to shock people?", "How to develop a decision tree?", etc. The full list of problem statements can be found in . Other examples include problem statements such as "How to make people aware?", "How to 'touch' people?", "How to visualize the danger?": "… we realize that people don't know about radon in our country. I live in the province of Luxembourg [Belgium], which is the most affected. And despite everything we do, people don't know about it.
I think that if we want to be able to act and do something, people must first know." (P2) "One difficulty is that when we talk about the FANC [Federal Agency of Nuclear Control], we don't know, it's something we don't know too much about, which is, which is not close to here. So, there is a certain distance, both physical and perhaps also in the consciousness of people." (P3) Other problem statements included issues related to "How to get help to remediate?", "How to find reliable information?" and "How to find the right solution for the right house?": "To give you an example, we have a list of companies in Luxembourg [country] that should be able to deal with radon. We contacted them all, the whole list, there is nobody who really has experience on it, but they are on the list of experts." (P5) After diverging, i.e., collecting different problem statements, and after saturation was reached, the participants converged by choosing the problems that they felt were most important, as presented in . Participants compiled their top 3 issues. To provide an overview of the prioritized issues, researchers attributed 3 points to their number 1, 2 points to their number 2, and 1 point to their number 3. The ones with the most points are therefore considered the most important.
Problem definition
After the first session, researchers clustered the problem statements thematically to identify the underlying facilitators. The following categories were formulated: installing standardization to ensure quality (n = 7), clarifying a stepwise approach (n = 4), communication through different stakeholders (n = 4), thresholds (n = 7), cost of mitigation (n = 2), mitigation contractors (n = 2), and communication (n = 10). The full overview can be found in . Since the study aimed to co-design communication tools, the problem definition was also related to communication.
Since communication was also highly represented and comprised some of the prioritized problem statements, this decision was justified.
4.1.2. Session 2: Solution Statements
In the second session, the working statement concerned communication. In total, 41 ideas were presented by the participants. Examples of ideas are workshops in primary schools, involving general practitioners in the communication concerning radon, creating a "radon safe" label, a testimonial of someone who easily mitigated, a catchy radio spot with humor, advertising via social media, more visibility for mitigation companies, flyers in public spaces, etc. The full list of communication ideas can be found in . After saturation during the brainstorming, participants converged by voting for their favorite ideas. They each had two votes, and the results are presented in . During this session, the facilitator prompted ideas for four steps of the homeowner journey: radon awareness, evaluation (before testing), action (i.e., mitigation), and ambassadorship. To simplify the process for the participants, the research team mapped the ideas to the homeowner journey among themselves after the session. Some ideas were mapped to multiple stages. The full overview can be found in . Most of the ideas were mapped to the first (n = 20) and the second step (n = 20), with many overlapping communication strategies such as an advertising campaign via social media, a catchy radio spot with humor, a booklet in schools, press articles, and flyers. In the action step, fewer ideas were presented (n = 14), and these strategies involved more specific information. Examples include a testimonial of someone who easily mitigated, flyers with information about mitigation costs, showing examples of other people who mitigated, showing pictures that emphasize the simplicity of the process, and providing more visibility to solutions and mitigators.
Finally, the last step, ambassadorship, received the fewest ideas (n = 5); however, those ideas do emphasize the social component of communication strategies, including, for instance, an advertising campaign on social media, a testimonial, creating a "radon safe" label, or organizing a competition with prizes for people who mitigated their houses. Due to the time constraints and weaker group dynamics of the online format, the second session of the first workshop ended with prioritizing solutions and did not proceed further with designing them.
4.2. Workshop 2: Slovenia (Face-to-Face)
4.2.1. Session 1: Problem Statements
Similar to the first workshop in Belgium, the first session in Slovenia was oriented toward problem formulations; however, the highly involved participants had already started formulating solutions at this stage. Despite the different formats, the solutions provided in this first session also expose underlying issues. For clarification, we rephrased the solutions from the first session into problem statements; however, the original formulations can still be found in . In total, 45 problem statements/solutions were formulated. A few examples include: "How to include radon as a topic in schools?", "How to provide understandable and accessible information about mitigation?", "How to provide accessible free dosimeters?", "How to get subventions from the state?", "How to guarantee the quality of the mitigation works?", etc. The full list can be found in . Another example is "How to increase awareness about radon in the population?". Multiple participants indicated that they learned about radon through their social networks: "Well, then one of my friends was encouraged [to test], and she also said, I didn't know either, I didn't know, and the problem is that we ordinary people don't even know, unless we are really terribly interested in it, to even report it so that you can measure it." (P6) "We had a measurement done because a friend of ours had done it about 500 m away, and then we had it done." (P9) After diverging, and when no new problems were added, the participants converged by voting for the problem statements they considered most important. They each cast three votes. The issues with the most votes were taken as the most important barriers. The results of the dot voting can be found in .
Problem definition
The problem statements were clustered thematically by the researchers, resulting in the following categories: communication, information, and awareness (n = 10), advice after measurement (n = 6), comprehensive/holistic approach (n = 3), accessibility of passive and active dosimeters and measurement support (n = 9), mitigation support (n = 5), the financial burden of mitigation (n = 5), the legal requirement (n = 6), and motivation (n = 1). The full overview can be found in . Similar to the Belgian workshop, the communication, information, and awareness category was emphasized. Again, this justified the decision to focus on communication in the second session. More specifically, the following questions were raised: How do you think radon awareness should be raised? Moreover, how should advice on mitigation be communicated?
4.2.2. Session 2: Solution Statements
For the first question, about awareness, 22 ideas were formulated, including advertisements on YouTube, TikTok, and Instagram, regular information about radon in the mass media, personal letters to all households, an interactive portal about radon, radon education in schools, and contributions about radon on TV, radio, and in newspapers. The participants voted for the best ideas, which can be found in . The group then discussed the details of the personal letter (i.e., informing households by post). For instance, the participants discussed that the letter should cover the prevalence of radon, the dangers, locations and ways to order dosimeters, the radon values of concern, and an invitation to participate in the measurements. They discussed that the municipality should draft the letter with an official signature. Further, they discussed the possibility of opening a special office to manage the radon campaign. The group also discussed whom to target and whether the mail should be addressed or unaddressed.
They mentioned that a special message could be printed on the envelope, such as "it's about your health." The participants agreed that the letter should be sent in the winter. Creating a logo or corporate identity was also discussed, using red and yellow, as these colors are associated with radon areas, and green, because it is associated with a solution. The logo should be intimidating in the first part and reassuring in the second part, as a solution. The group also discussed distributing the letter by e-mail and social media. For the second question, concerning advice on mitigation, the group formulated 13 ideas. Examples included personal testimonials of people during mitigation, a list of mitigation contractors, social media campaigns, and personal communication with a selected advisor. The full list can be found in . The results of the voting for the second question, with the resulting prioritized ideas, can be seen in . The idea that received the most support was to hear people's testimonials about their experiences with mitigation. The stories could include either a successful experience or lessons learned from less successful experiences. There was an idea to organize this through social networks online, for instance, through municipalities on social media. The group agreed that the information should not be too technical and should not resemble a commercial. Finally, they also discussed the need to target younger generations, who are buying and building houses, and that information channels should be chosen accordingly.
5. Discussion
By setting up a qualitative co-design workshop with homeowners, we aimed to gain more in-depth knowledge about the barriers that people experience in mitigating their houses, on the one hand, and to collect their creative input and insights into how communication about the dangers of radon could be improved, on the other. First of all, the results show that the barriers people experience are situated within different levels of intervention and different steps of behavior, as described in the literature review. The stages discussed in this section are simplified and, for clarity, focus on awareness, testing, and mitigating behavior. Barriers related to the first stages of behavior concerned a lack of awareness and of engaging communication. The participants agreed that awareness should be the first step.
In Belgium, the focus was placed on more attention-grabbing awareness campaigns, such as social media campaigns and humor, while Slovenia focused on personalized letters. This is in line with the research of Weinstein et al., who tested whether personalized phone calls and letters affected perceived susceptibility and self-protective behavior (i.e., intention to test). They determined that perceived susceptibility did increase significantly for those who received the phone call and the letter; however, no differences were detected in terms of intention to test. This could indicate that the letters proposed by the participants could successfully increase engagement with the health topic, yet that other communication strategies are needed to address the further steps in the mitigating process. These results also show the nuance of the concept of awareness, where a discrepancy between being aware and making a personal risk assessment remains. As Poortinga et al. reported, high levels of awareness do not always result in higher levels of concern; therefore, raising awareness could focus more on grabbing attention and raising curiosity rather than merely informing. Barriers associated with testing behavior include the lack of available active and passive dosimeters in Slovenia. According to the participants, communication in this stage should be more specific than in the awareness stage; for instance, a comprehensive website, workshops, or newspaper articles would provide them with the information they need without overwhelming them. Moreover, information from different stakeholders, such as medical doctors, could help emphasize the importance of radon testing. Apart from the accessibility of tests in Slovenia, no issues were mentioned regarding the costs of test kits. When examining the next stage, it can be observed that many barriers are related to mitigating behavior.
Participants highlighted the importance of personalized advice after testing, with a clear step-wise approach to what steps to take next and how to take them. Finding mitigation companies with radon experience was challenging, according to the participants in Belgium and Slovenia. Moreover, the lack of guaranteed results after mitigation was a particularly important barrier in Belgium. Participants indicated that it should be the state's responsibility to implement regulations for these companies, as that would make it easier for homeowners to find the best help for their particular radon problem. This could be achieved by certifying certain mitigation companies or by inspecting them, as proposed by the participants. Further, the financial burden of mitigation was mentioned in both workshops, emphasizing the need for subventions or financial aid from the government. Regarding mitigation behavior, the participants indicated a need for communication on different levels, for instance, stakeholder communication. They felt the involved stakeholders (e.g., medical professionals, mitigation companies, local authorities) are not sufficiently up to date to help homeowners with radon issues accordingly. Especially in this stage, participants expressed a need for detailed and clear information, and both countries suggested using testimonials. The participants emphasized that the testimonial should contain the story of someone who mitigated their house or the lessons that could be learned from unsuccessful mitigations. In that way, both the problem and the solution would be addressed. This idea is already supported by the literature on narratives, which states that narratives can help facilitate information processing, comprehension, and recall. Overarching barriers were related to legislation and regulation.
On the policy level, both workshops showed a need for obligatory radon measures in new buildings; moreover, a need for a European standard was also expressed in Belgium. Despite the European Basic Safety Standards and the inclusion of radon measures in the building permit in Belgium, participants still identified these aspects as needs for future policy-level interventions (Council Directive 2013/59/EURATOM of 5 December 2013). This aligns with the current policy measures; however, such policies must be implemented sufficiently to affect the barriers homeowners experience. Further, policy changes adding radon levels to the energy certificate, to regulate radon levels in the housing market, were proposed, which agrees with previous research on mitigation. Regarding communication, the participants highlighted the need for a holistic, step-wise approach, where communication follows the different stages of behavior and a consistent message is conveyed across stakeholders, channels, and time. Generally, it is important to note that behavior change will only occur if the environment is ready. In other words, barriers related to, for instance, the availability of dosimeters and mitigation companies should be addressed before communicating about the health risks, to ensure fitting solutions are available. This study indicated that co-design workshops and participatory research are crucial to gaining the users' perspectives and ideas early in the intervention design. When comparing both workshops, the face-to-face format was preferred, especially since this setting strengthened the group dynamic and collaboration. The online format was, given the circumstances, still valuable for understanding the barriers and collaborating on communication ideas, yet a face-to-face setting was needed to conduct an even more in-depth inquiry.
Design thinking workshops have been shown to be valuable in the intervention design process related to radon; however, other health topics could and should also be addressed with participatory methods, such as design thinking, early on to maximize the involvement and input of the target group. 5.1. Limitations Just like any study, this study also experienced some limitations. Ideally, both workshops would be conducted in a face-to-face setting instead of the online setting in Belgium. This would facilitate even more creativity and sharing experiences among the participants. Moreover, recruitment challenges limited us to one workshop with two sessions in each country. Although we gained many new perspectives and ideas, more workshops with more participants would allow for saturation among the population instead of saturation among the sample. Regarding the sample of these workshops, we focused on homeowners who had measured (high) radon levels in their homes. Although this was the purpose of the study, it created selection bias. 5.2. Future Research Future research should explore more participatory research designs, both in intervention design research and radon health communication, emphasizing different social categories and countries. Moreover, scholars could investigate more comprehensive communication strategies with adapted messages depending on the sample’s behavior change stage. Finally, researchers could explore the ideas provided by the participants further in terms of theoretical framework, but also in terms of effectiveness in a lab setting. In this study, two questions were raised: first, what are the main barriers and facilitators to engaging in radon-protective behavior experienced by homeowners, and how are these addressed in current interventions? Second, how can the communication about radon be improved to be more relevant and engaging for the target group? To investigate these questions, we designed a participatory co-design research method with homeowners in Belgium and Slovenia. The findings of these workshops show that participants require more policy and legislation, for instance, about certifying mitigation companies or including radon measurement on the energy certificate. Moreover, they experience a need for support from the state during radon testing and mitigating procedures, both in terms of financial aid and communication or advice. Furthermore, they indicated a need for more awareness among the general public and, more specifically, noted a lack of engagement. A holistic communication approach is also needed, including by stakeholders such as general practitioners and architects. 
When looking at communication specifically, both workshops suggested that communication strategies should be amended to match the stage from awareness to having a radon-safe home. Communication tools such as radio spots with humor or personalized letters to raise awareness and engagement were proposed. Further, testimonials were pointed out as an effective way to highlight the issues and solutions of people who reported similar experiences. Further research should adopt co-design methods, both in research about radon health communication and in different fields. Further, scholars could test the effectiveness of some of these ideas in a controlled setting and in an integrated, multi-stage intervention. |
Characterizing brain dynamics during ketamine-induced dissociation and subsequent interactions with propofol using human intracranial neurophysiology | 0145d0a7-efc5-42bc-a1d3-20481c40a8fa | 10060225 | Physiology[mh] | Ketamine is a dissociative anesthetic that has both anesthetic and psychoactive properties , . Intravenous induction doses (1–2 mg/kg) of ketamine result in a rapid loss of consciousness appropriate for general anesthesia , . At subanesthetic doses (0.5 mg/kg), ketamine produces a dissociative state, which includes gaps in memory, out-of-body experiences, and altered sensory perception – . In addition, intravenous administration of a subanesthetic dose of ketamine induces significant and rapid antidepressant-like response in depressed patients . Although ketamine was approved by the Food and Drug Administration (FDA) for adult patients with treatment-resistant depression , the neuropsychiatric side effects have limited its extensive use in clinical practice , . Defining the neural circuits engaged in ketamine’s rapid antidepressant and dissociative effects is an important priority that could facilitate the development of improved therapies with fewer side effects and greater safety. Ketamine is known to induce profound changes in brain oscillatory dynamics that appear to be correlated with its antidepressant and sensory dissociative activity , – . The electrophysiologic profile of subanesthetic ketamine in humans generally includes an increase of gamma oscillation power and a decrease of delta, alpha, and beta oscillation power , – . Oscillatory power changes have also been reported in patients with depression and have been used to differentiate depressive from healthy subjects . However, the relationships between these changes in oscillatory power and the neural circuit mechanisms of depression and dissociation are not well-understood. 
Previous studies suggest that at subanesthetic doses, ketamine preferentially blocks the NMDA receptors on GABAergic inhibitory interneurons, resulting in the disinhibition of downstream excitatory pyramidal neurons that is thought to facilitate increased gamma-band activity – . When GABA A agonists, such as benzodiazepines, are administered alongside ketamine, they mitigate dissociation, possibly by restoring inhibitory activity in the affected brain regions , . In addition, ketamine inhibits the hyperpolarization-activated cyclic nucleotide-gated potassium channel 1 (HCN1), a molecular target that is thought to play an important role in generating rhythmic EEG activity and is considered a novel therapeutic target for depressive disorders – . Studies have been conducted to investigate which cortical or subcortical structures play a major role in mediating this process. Previous work has shown that ketamine’s antidepressant effects are largely dependent upon its actions within the prefrontal cortex and the hippocampus . On the other hand, the reduction of alpha oscillations in the precuneus and temporal-parietal junction and the 3 Hz rhythm in the deep posteromedial cortex (PMC), as studied in rodents and a human patient, have been proposed as mechanisms for ketamine-induced dissociation , , . Functional connectivity analysis with fMRI and EEG suggests that ketamine disrupts the frontoparietal default mode network connectivity , . Although ketamine’s antidepressive and dissociative effects are known to co-occur whenever the drug is administered, these effects may in fact be mediated by distinct mechanisms within distinct neural circuits. If that were true, it might be possible to design novel therapeutics with greater specificity and fewer side effects. 
In this study, we measured intracranial EEG (iEEG) in human patients implanted with intracranial electrodes who were administered a subanesthetic dose of ketamine prior to induction of general anesthesia with propofol for electrode removal surgery. Our goal was to characterize the brain regions involved in different ketamine-induced rhythms in order to better understand their potential role in mediating ketamine’s dissociative and antidepressant properties. In addition to characterizing changes in canonical frequency bands associated with subanesthetic ketamine, we also looked for evidence of a 3 Hz rhythm recently implicated in ketamine-induced dissociation . To characterize the potential role of NMDA and HCN1 receptors in producing ketamine-induced oscillations, we analyzed the interactions between subanesthetic ketamine and propofol. Propofol is a positive GABA allosteric modulator and HCN1 blocker , . Propofol’s GABAergic activity would be expected to antagonize any ketamine-induced oscillations stemming from NMDA-mediated disinhibition. At the same time, propofol would be expected to further potentiate any ketamine-induced oscillations originating from HCN1 inhibition. We collected data from 10 epilepsy patients implanted with intracranial depth electrodes to identify sites of epileptogenic origin (Table , Supplementary Fig. and Supplementary ). The responses on the abbreviated Clinician-Administered Dissociative States Scale (CADSS) – questionnaire (Supplementary Fig. ) are summarized in Supplementary Table . The responses on the questionnaire are consistent with a dissociative state induced by subanesthetic ketamine. Ketamine and propofol-induced location- and frequency-dependent iEEG dynamics We observed distinct dynamic patterns in the iEEG after ketamine infusion, which changed after the administration of propofol. 
Figure shows the spectrogram and power spectra for 3 channels in the inferior frontal, middle temporal, and occipital cortices from an example subject. The spectrograms for other subjects are shown in Supplementary Fig. . Under ketamine, we observed increased gamma power (25–55 Hz) in the inferior frontal channel and decreased alpha power (8–15 Hz) in the middle temporal and occipital channels. After propofol was added, there was a large increase of power in the inferior frontal and middle temporal channels for nearly all frequencies, except for upper gamma band (40–55 Hz). In contrast, the reduction of alpha oscillations in the occipital channels was further enhanced with the addition of propofol. These results suggest that the iEEG dynamics induced by ketamine and propofol are location- and frequency-dependent. To understand how these brain dynamics mapped to different brain structures, we analyzed the changes in power for different cortical and subcortical structures, first after ketamine infusion and then after the addition of propofol. Ketamine induced an increase in gamma oscillation power and a reduction of low-frequency oscillation power We analyzed the changes in iEEG dynamics for different brain structures after ketamine infusion (Fig. , Supplementary Table and Supplementary Fig. ). For gamma frequencies (25-55 Hz), a greater than 100 dB increase in mean power after ketamine infusion compared with baseline was detected in frontal structures, which include the anterior and posterior cingulate (159.04 dB), superior frontal (153.03 dB), middle frontal (153.59 dB), orbitofrontal (133.68 dB), and inferior frontal (149.20 dB) areas. The mean power increase in precentral, postcentral, isthmus cingulate, temporal structures, lingual, pericalcarine, hippocampus, amygdala, striatum, and insula, was between 19.07 and 96.37 dB. A decrease in mean gamma power was detected in occipital channels (−42.96 dB). 
For beta frequencies (15-25 Hz), while an increase in power was detected in hippocampus and amygdala (4.43 dB), a decrease in power was detected for middle frontal (−14.26 dB), precentral (−18.21 dB), postcentral (−36.00 dB), isthmus cingulate (−10.53 dB), parietal (−15.90 dB) and temporal structures (−8.40 dB), as well as the lingual and pericalcarine (−19.58 dB), and the occipital cortices (−43.81 dB). No other structural labels showed changes in power after ketamine infusion (i.e., confidence intervals overlapped zero). For alpha frequencies (8-15 Hz), the decrease of mean alpha power was observed for nearly all structure labels with the largest reduction in postcentral (−33.55 dB) and occipital cortices (−32.07 dB). For theta rhythms (4-8 Hz), we identified an increase of power in insula cortex (3.88 dB) and decrease of power in superior frontal (−5.65 dB), precentral (−9.96 dB), postcentral (−4.50 dB), parietal (−7.75 dB) and temporal structures (−5.83 dB), lingual and pericalcarine (−11.68 dB), as well as the occipital cortices (−20.52 dB) and striatum (−1.85 dB). For slow (0.1-1 Hz) and delta frequencies (1-4 Hz), the decrease in power was observed in most of the structural labels (slow: −1.51 to −3.51 dB, delta: −1.40 to −12.91 dB), except for orbitofrontal, isthmus cingulate, striatum, and insula cortex, which did not show changes in power after ketamine infusion. Propofol reversed the gamma band iEEG dynamics induced by ketamine in frontal regions and caused a further reduction of occipital alpha oscillation power Adding propofol (Fig. , Supplementary Table and Supplementary Fig. ) reversed the gamma power (40–55 Hz) increase in anterior and posterior cingulate (−61.26 dB), superior frontal (−68.32 dB), middle frontal (−134.51 dB), orbitofrontal (−52.27 dB), and inferior frontal (−61.86 dB) regions of the brain, as well as the gamma power decrease in the occipital cortex (18.85 dB). 
In addition, propofol further intensified the gamma power increase at precentral (49.39 dB), postcentral (67.34 dB), isthmus cingulate (9.78 dB), hippocampus and amygdala (20.56 dB). The presence of propofol reversed the alpha power (8-15 Hz) decrease induced by ketamine for most of the structural labels (33.33 to 158.32 dB) except for occipital cortices (−35.20 dB), which showed a further reduction in power after propofol administration. In addition, propofol increased the beta power (15-25 Hz) for nearly all structural labels (22.00 to 214.90 dB). For theta rhythms (4-8 Hz), propofol also increased theta power in most of the structural labels (17.35 to 68.88 dB). The addition of propofol reversed the power reduction induced by ketamine at slow (0.1-1 Hz, 7.31 to 38.66 dB) and delta (1-4 Hz, 10.29 to 92.64 dB) oscillations for all the structural labels. Subanesthetic doses of ketamine induced an increase of 3 Hz oscillation in posteromedial cortex (PMC) We studied the spatial distribution of 3 Hz rhythms after the administration of ketamine and propofol (Fig. , Supplementary Table and Supplementary Fig. ). We identified a dramatic increase of 3-4 Hz oscillatory power after ketamine infusion in posterior (2.05 dB) and isthmus (1.00 dB) cingulate cortex, which are part of the PMC, as well as the pars opercularis (2.90 dB) located within the inferior frontal cortex (Fig. ). We then analyzed the spectrum of the oscillatory activity within PMC by plotting the power differences after ketamine relative to baseline for posterior and isthmus cingulate cortex as a function of the frequency (Fig. ). We found that the increase of iEEG power after ketamine peaked between 3 to 6 Hz. The addition of propofol greatly increased the 3-4 Hz power in most brain regions (6.34 to 24.57 dB), including the posterior and isthmus cingulate cortex, suggesting that the effects of ketamine and propofol on this 3-4 Hz rhythm may be additive rather than antagonistic (Fig. ). 
In this study, we provide, in humans, a detailed description of the principal oscillatory changes in cortical and subcortical structures after the administration of a subanesthetic dose of ketamine. 
Using intraoperative recordings from intracranial electrodes in 10 patients with epilepsy, we found that ketamine increased gamma oscillations within prefrontal cortical areas and the hippocampus—structures previously implicated in ketamine’s antidepressant effects . Furthermore, our studies provide direct evidence of a ketamine-induced 3 Hz oscillation in posteromedial cortex that has been proposed as a mechanism for its dissociative effects . By analyzing changes in neural oscillations after the addition of propofol in 7 out of 10 subjects, we were also able to identify putative NMDA-mediated brain dynamics that could be antagonized by propofol’s GABAergic activity, as well as possible HCN1-mediated effects where both drugs showed an additive effect. Overall, our results suggest that ketamine engages different neural circuits in distinct frequency-dependent patterns of activity to produce its antidepressant and dissociative sensory effects. These insights may help guide the development of brain dynamic biomarkers and novel therapeutics for depression. For gamma frequencies (25–55 Hz), we observed a remarkable increase in power in frontal and limbic structures that are consistent with previous reports employing non-invasive EEG in humans under both subanesthetic and anesthetic doses of ketamine , – , . We found that the gamma band activity was reversed after the subsequent addition of propofol in prefrontal cortical structures. We propose that the ketamine-induced gamma power increase and its subsequent reversal by propofol could be explained by an antagonist mechanism (Fig. , top panel ). Ketamine preferentially blocks the NMDA receptors on GABAergic inhibitory interneurons, resulting in disinhibition of the downstream excitatory pyramidal neurons, which mediates the increased gamma-band activity – . 
When propofol, a positive GABA allosteric modulator, is administered alongside ketamine, it antagonizes the gamma power increase by restoring some of the inhibitory activity in the prefrontal cortex. The increase in gamma spectral power anteriorly following subanesthetic ketamine infusion may reflect a shift of brain activity from a globally balanced state to a disorganized and autonomous state . The changes in gamma band activity in sensory cortices may contribute to the discoordination of higher-order functional networks and perceptual distortions produced by subanesthetic doses of ketamine , , . In contrast, for alpha frequencies (8–15 Hz), we detected a large reduction in iEEG power after ketamine infusion for all brain regions studied, with the largest reductions occurring in posterior sensory cortices. When propofol was subsequently administered, the reduction in alpha power was reversed in most brain regions, suggesting a similar NMDA-dependent mechanism as described above for gamma activity. However, in posterior sensory structures (lingual, pericalcarine and occipital cortices), the addition of propofol further attenuated alpha power. We attribute this additive behavior to ketamine and propofol’s shared inhibition of HCN1 channels (Fig. , middle panel ). HCN1 channels have been identified as an important molecular target for ketamine’s action . Knockout of HCN1 channels abolishes the ketamine-induced loss-of-righting reflex, a behavioral correlate of unconsciousness in rodents . Propofol also inhibits HCN1 channels, and HCN1 knockout mice are known to be less sensitive to unconsciousness due to propofol . Modeling studies suggest that reductions in hyperpolarization-activated cationic current (Ih) mediated by HCN1 can abolish occipital alpha rhythms by silencing thalamocortical cells . 
The reduction of alpha power in occipital regions is also observed during anesthetic doses of ketamine , , propofol-induced unconsciousness , as well as sleep , , suggesting the loss of occipital alpha rhythms may be a hallmark for disrupted sensory processing in different states of altered arousal . We found that subanesthetic doses of ketamine induced a 3 Hz oscillation in PMC in humans, consistent with previous studies in mice after administration of ketamine and in an epileptic patient during a pre-seizure aura as well as in response to electrical stimulation of epileptic foci . Vesuna et al. (2020) showed that there are NMDA receptors and HCN1 channels in the homologous deep retrosplenial (RSP) cortex in mice, both of which are required for generating the observed 3 Hz rhythmic activity . Knockout of HCN1 channels abolished ketamine-induced rhythms in RSP and the dissociation-related behavior in mice, whereas optogenetic inhibition of long-range inputs to the RSP enhanced ketamine-induced oscillations . Vesuna et al. proposed that ketamine blockade of NMDA receptors could hyperpolarize membrane potentials in PMC, activating intrinsic HCN1 channels and permitting rhythmic dynamics. We propose that the same effect could occur with propofol by way of a GABA-mediated hyperpolarization (Fig. , bottom panel ). Although both ketamine and propofol induced 3 Hz rhythms in PMC, dissociation was only detected after ketamine. This may be because propofol suppresses arousal and induces unconsciousness, which would supersede any perceived dissociative effects. Besides its dissociative effects, subanesthetic ketamine has been shown to have a powerful antidepressant effect. The oscillatory circuit dynamics produced by ketamine may be related to this antidepressant effect. Subjects with a history of depression have been observed to have higher amplitude delta and theta oscillations compared to controls during a working memory task . 
Consistent with this observation, we found that ketamine reduces delta and theta oscillation power. Patients with depression have also been reported to have increased activity in alpha, beta, and theta bands at the occipital and parietal regions of the brain . Accordingly, we identified a global reduction of power at theta, alpha and beta frequencies, with the largest reduction in occipital and parietal regions after ketamine infusion. Gamma oscillations have also been discussed as a potential biomarker for depression. Changes in gamma rhythms can vary according to behavioral states and task conditions, but there are a few studies suggesting that reduced gamma power is associated with depression. One EEG study found that subjects with high depression scores had reduced resting gamma power in the anterior cingulate cortex . Another MEG study showed that depressed subjects with lower baseline gamma and higher ketamine-induced gamma had a better response to ketamine than those with higher baseline gamma . It is also known that the prefrontal cortex and hippocampus are implicated in ketamine’s antidepressant response . The dramatic increase in gamma rhythms we identified in those brain regions with subanesthetic doses of ketamine is consistent with previous studies. In this study, although we did not directly measure clinical depression or antidepressant effects, we inferred that our results could be related to ketamine’s antidepressant effects, based on the neuroanatomy of the brain oscillations we identified and prior literature that showed associations among depression, brain dynamics, and functional neuroanatomy. Future studies investigating brain dynamics after ketamine infusion in depressed patients are needed. In this study, we focused primarily on the role of NMDA receptors, which appear to play a central role in mediating ketamine’s effects on brain dynamics as well as its antidepressant effects . 
The role of other receptors, such as AMPA receptors, which have been suggested to play an important role in ketamine’s antidepressant effects , should also be investigated in the future. In follow-up studies it would also be interesting to explore the relationship between EEG oscillatory dynamics and the intensity level of dissociation, which could not be addressed in the current study due to our limited sample size and the limited resolution of dissociation assessment. Cross-frequency coupling analysis could be an additional topic of interest for characterizing the interactions between oscillations at different frequency bands. Our results also show how the combination of ketamine and propofol could contribute to unconsciousness through a shared mechanism, providing an explanation for why propofol and ketamine appear to work synergistically to maintain unconsciousness when administered during general anesthesia . Overall, we find that ketamine has distinct dynamic effects on neural systems known to mediate cognition, depression, and sensory processing by way of multiple dissociable neuropharmacological mechanisms. The neural circuit mechanisms underlying ketamine-induced oscillatory dynamics, and their potential links to antidepressive and dissociative effects as proposed in this study, may have important implications for the development of novel therapies with fewer side effects and greater safety. Subject recruitment Patients with medication-refractory epilepsy implanted with intracranial depth electrodes to locate their seizure onset zone were recruited from Massachusetts General Hospital and Brigham and Women’s Hospital. Electrode placement was determined by the clinical team independent of this study. Ten patients (five male and five female) aged 22 to 59 years old were recruited. Subjects’ demographic and electrode information are summarized in Table . 
This study was approved by the Institutional Review Board (IRB) covering the two hospitals (Mass General Brigham Human Research Committee). Informed consent was obtained from all subjects prior to the study. Experimental procedure All experiments were conducted during stereotactic neurosurgery for removal of the intracranial depth electrodes in the operating room at the Massachusetts General Hospital or the Brigham and Women’s Hospital. Participants were implanted with multi-lead depth electrodes (a.k.a. stereotactic EEG, sEEG) to confirm the hypothesized seizure focus and to locate epileptogenic tissue in relation to essential cortex, thus directing surgical treatment. Depth electrodes (Ad-tech Medical, Racine WI, USA, or PMT, Chanhassen, MN, USA) with diameters of 0.8–1.0 mm and consisting of 8–16 platinum/iridium-contacts 1–2.4 mm long were stereotactically placed in locations deemed necessary for seizure localization by a multidisciplinary clinical team. The first period was a baseline recording of 5 min (Fig. ). The second period consisted of 14 min with continuous infusion of a subanesthetic dose of ketamine (total dose of 0.5 mg/kg over 14 min, Supplementary Fig. shows pharmacokinetic effects of different ketamine delivery schemes). At the end of the ketamine infusion, a clinical research staff member administered the abbreviated CADSS questionnaire (Supplementary Fig. ) to the patients – . Because of limited time in the operating room, patients only answered yes or no to the questions. Immediately after the questionnaire, a propofol bolus was given to the patients to induce general anesthesia. During the whole process, subjects were instructed to close their eyes to avoid eye-blink artifacts in the signal. Supplementary Fig. shows oxygen saturation (SpO2), mean arterial pressure (MAP), pulse, and end-tidal CO2 for the study period. iEEG signals were recorded using a Blackrock Cerebus system (Blackrock Microsystems) sampled at 2,000 Hz. 
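The infusion scheme described above (a total dose of 0.5 mg/kg delivered at a constant rate over 14 min) implies simple per-patient dosing arithmetic. A minimal sketch follows; the function and variable names are illustrative and not from the study's code:

```python
# Illustrative arithmetic only; names are ours, not from the study's code.

def ketamine_infusion_rate(weight_kg, dose_mg_per_kg=0.5, duration_min=14.0):
    """Total dose (mg) and constant infusion rate (mg/min) for the
    0.5 mg/kg-over-14-min scheme described above."""
    total_dose_mg = dose_mg_per_kg * weight_kg
    rate_mg_per_min = total_dose_mg / duration_min
    return total_dose_mg, rate_mg_per_min

# e.g., a hypothetical 70 kg patient receives 35 mg total at 2.5 mg/min
total_mg, rate_mg_min = ketamine_infusion_rate(70.0)
```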
Before each study, structural MRI scans were acquired for each subject (Siemens Trio 3 Tesla, T1-weighted magnetization-prepared rapid gradient echo, 1.3-mm slice thickness, 1.3 × 1 mm in-plane resolution, TR/TE = 2530/3.3 ms, 7° flip angle).

iEEG preprocessing, power spectral analysis, and statistical analysis

Data analysis was performed using custom analysis code in MATLAB (R2021a). Raw iEEG data were notch filtered at 60 Hz and its harmonics, downsampled to 500 Hz, and detrended across the entire recording. The signals were then visually inspected, and channels with noise or artifacts were removed. Data were re-referenced with a bipolar montage. A total of 824 bipolar channels were generated for the 10 subjects who received ketamine, and 606 bipolar channels were generated for the 7 subjects who received propofol (Supplementary Fig. and Supplementary ). Spectral analysis was performed using the multitaper method, with window lengths of T = 2 s with 0.5 s overlap, time-bandwidth product TW = 3, number of tapers K = 5, and spectral resolution of 3 Hz. The mean power spectral density for the baseline, ketamine, and propofol conditions was calculated by taking the average across each period. The power spectral density was converted to decibels (dB) to facilitate easier comparisons. The differences in power after ketamine infusion relative to baseline, and after propofol relative to the ketamine period, were calculated by subtracting the mean power spectral density in dB between each of the two conditions at different frequencies (slow: 0.1–1 Hz, delta: 1–4 Hz, theta: 4–8 Hz, alpha: 8–15 Hz, beta: 15–25 Hz, gamma: 25–55 Hz, low gamma: 25–40 Hz, upper gamma: 40–55 Hz). Our primary objective was to describe changes in iEEG power by reporting effect sizes and confidence intervals for changes in iEEG power in the indicated brain regions of interest (ROIs) after drug administration. We did not report p-values and thus did not correct for multiple comparisons.
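As a rough illustration of this spectral pipeline (a Python sketch, not the authors' MATLAB code), a multitaper PSD with the stated parameters can be computed with SciPy's DPSS tapers. Note that the half-bandwidth TW/T = 1.5 Hz corresponds to the stated ~3 Hz full spectral resolution (2·TW/T).

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=3.0, k=5):
    """PSD of one window, averaged over k DPSS tapers
    (time-bandwidth product nw), as in the multitaper method."""
    n = len(x)
    tapers = dpss(n, nw, k)                       # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs

def mean_psd_db(x, fs, win_s=2.0, overlap_s=0.5, **kw):
    """Mean PSD in dB over overlapping windows (T = 2 s, 0.5 s overlap)."""
    n_win, step = int(win_s * fs), int((win_s - overlap_s) * fs)
    starts = range(0, len(x) - n_win + 1, step)
    psds = [multitaper_psd(x[i:i + n_win], fs, **kw)[1] for i in starts]
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    return freqs, 10 * np.log10(np.mean(psds, axis=0))
```

Band powers (slow, delta, theta, and so on) would then follow by averaging the dB spectrum over the corresponding frequency ranges.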
The bootstrap method was used to generate the 95% confidence interval around the mean differences in power for each structural label at each frequency, using data from all subjects who had electrodes located within each structural label. The upper and lower bars represent the bootstrapped 95% confidence interval bounds.

Structural parcellation of the brain

The electrode positions in each subject's brain were obtained by aligning the preoperative T1-weighted MRI with a postoperative CT/MRI using the Freesurfer (7.2) image analysis tool. To identify the structural label and functional network for each of the electrodes, an electrode labeling algorithm (ELA) was employed. This algorithm estimated the probability of overlap of an expanding area around each electrode with brain structural labels that had been identified in the Desikan-Killiany-Tourville (DKT) 40 atlas using purely anatomical approaches. The ELA then used gradient descent to find the closest voxel in the template brain that gives similar regions and probabilities, in order to transform the patients' electrode coordinates to the template brain. Based on the DKT 40 atlas, we assigned the 824 electrodes from the 10 subjects who received ketamine to 49 structural labels, which were then further classified into 15 labels according to the anatomical locations and the mean differences in power after ketamine relative to the baseline condition. Likewise, we assigned the 606 electrodes collected from the 7 subjects who received propofol to 14 structural labels. We plotted all electrodes on the Colin 27 template brain, with colors per parcellated brain region indicating the differences in power for the ketamine infusion period relative to baseline, as well as for the propofol bolus relative to the ketamine infusion period, for each of the frequencies.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
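The percentile bootstrap described above for the confidence intervals can be sketched as follows (an illustrative re-implementation, not the study's code; resampling is over the channel-level power differences within one structural label):

```python
import numpy as np

def bootstrap_ci(power_diffs_db, n_boot=10_000, ci=95, seed=0):
    """Percentile-bootstrap CI on the mean power difference (dB) across
    the channels assigned to one structural label."""
    rng = np.random.default_rng(seed)
    d = np.asarray(power_diffs_db, dtype=float)
    # Resample channels with replacement, n_boot times.
    idx = rng.integers(0, len(d), size=(n_boot, len(d)))
    boot_means = d[idx].mean(axis=1)
    lo, hi = np.percentile(boot_means, [(100 - ci) / 2, (100 + ci) / 2])
    return d.mean(), lo, hi
```

Repeating this per label and per frequency band yields the error bars described above.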
Sharing neurophysiology data from the Allen Brain Observatory | f2a15b24-9e36-49ac-90a4-ba9ed50576b8 | 10335829 | Physiology[mh] | Why share data? The central nervous system is among the most complex organs under investigation. Accordingly, the tools to study it have become intricate and costly, generating ever-growing torrents of data that need to be ingested, quality-controlled, and curated for subsequent analysis. Not every lab has the financial or personnel resources to accomplish this. Moreover, while many scientists relish running experiments, others find their passion in analysis. Data collection requires a different skillset than analysis, especially as the field demands more comprehensive and higher-dimensional datasets, which, in turn, necessitate more advanced analytical methods and software infrastructure. A scientific ecosystem in which data is extensively shared and reused would give researchers more freedom to focus on their favorite parts of the discovery process. Sharing data brings other benefits as well. It increases the number of eyes on each dataset, making it easier to spot potential outlier effects . It encourages meta-analyses that integrate data from multiple studies, providing the opportunity to reconcile apparently contradicting results or expose the biases inherent in specific analysis pipelines . It also gives researchers a chance to test hypotheses on existing data, refining and updating their ideas before embarking on the more costly process of running new experiments. Without a doubt, reanalysis of neurophysiology data has already facilitated numerous advances. Electrophysiological recordings from nonhuman primates, which require tremendous dedication to collect, are often reused in multiple high-impact publications . Data from ‘calibration’ experiments, in which activity of individual neurons is monitored via two modalities at once, have been extremely valuable for improving data processing algorithms . 
A number of these datasets have been shared via the website of CRCNS, a far-sighted organization focused on aggregating data for computational neuroscience within the same searchable database. To date, CRCNS hosts 150 datasets, including extensive neurophysiology recordings from a variety of species, as well as fMRI, EEG, and eye movement datasets. This is especially impressive given that CRCNS was launched by a single lab in 2008. The repository does not enforce formatting standards, and thus each dataset differs in its packaging conventions, as well as in what level of preprocessing may have been applied to the data. The website includes a list of 111 publications and preprints based on CRCNS data. Our own meta-analysis of these articles shows that 28 out of 150 datasets have been reused at least once, with four reused more than 10 times each. More recently, an increasing number of researchers are choosing to make data public via generalist repositories such as Figshare, Dryad, and Zenodo, or the neuroscience-specific G-Node Infrastructure. In addition, the lab of György Buzsáki maintains a databank of recordings from more than 1000 sessions from freely moving rodents. As data can be hosted on these repositories for free, they greatly lower the barriers to sharing. However, the same features that reduce the barriers for sharing can also increase the barriers for reuse. With no restrictions on the data format or level of documentation, learning how to analyze diverse open datasets can take substantial effort, and scientists are limited in their ability to perform meta-analyses across datasets. Further, with limited and nonstandard documentation, finding relevant datasets can be challenging. Since its founding, the Allen Institute has made open data one of its core principles.
Specifically, it has become known for generating and sharing survey datasets within the field of neuroscience, taking inspiration from domains such as astronomy where such surveys are common. (As a community, astronomers have developed a far more comprehensive and coherent data infrastructure than biology. One obvious reason is the existence of a single sky with an agreed-upon coordinate system and associated standards such as the Flexible Image Transport System.) The original Allen Mouse Brain Atlas and subsequent surveys of gene expression, mesoscale connectivity, and in vitro firing patterns have become essential resources across the field. These survey datasets (1) are collected in a highly standardized manner with stringent quality controls, (2) create a volume of data that is much larger than typical individual studies within their particular disciplines, and (3) are collected without a specific hypothesis, to facilitate a diverse range of use cases. Starting a decade ago, we began planning the first surveys of in vivo physiology in mouse cortex with single-cell resolution. Whereas gene expression and connectivity are expected to change relatively slowly, neural responses in awake subjects can vary dramatically from moment to moment, even during apparently quiescent periods. Therefore, an in vivo survey of neural activity poses new challenges, requiring many trials and sessions to account for both intra- as well as inter-subject variability. We first used two-photon calcium imaging and later Neuropixels electrophysiology to record spontaneous and evoked activity in the visual cortex and thalamus of awake mice that were passively exposed to a wide range of visual stimuli (known as 'Visual Coding' experiments). A large number of subjects, highly standardized procedures, and rigorous quality control criteria distinguished these surveys from typical small-scale neurophysiology studies.
More recently, the Institute carried out surveys of single-cell activity in mice performing a visually guided behavioral task (known as 'Visual Behavior' experiments). In all cases, the data were shared even before we published our own analyses of them. We reflect here on the lessons learned concerning the challenges of data sharing and reuse in the neurophysiology space. Our primary takeaway is that the widespread mining of our publicly available resources demonstrates a clear community demand for open neurophysiology data and points to a future in which data reuse becomes more commonplace. However, more work is needed to make data sharing and reuse practical (and ideally the default) for all laboratories practicing systems neuroscience. The Allen Brain Observatory consists of a set of standardized instruments and protocols designed to carry out surveys of cellular-scale neurophysiology in awake brains. Our initial focus was on neuronal activity in the mouse visual cortex. Vision is the most widely studied sensory modality in mammals, but much of the foundational work is based on recordings with hand-tuned stimuli optimized for individual neurons, typically investigating a single area at a time. The field has lacked the sort of unbiased, large-scale surveys required to rigorously test theoretical models of visual function. The laboratory mouse is an advantageous model animal given the extensive ongoing work on mouse cell types, as well as access to a well-established suite of genetic tools for observing and manipulating neural activity via driver and reporter lines or viruses. Our two-photon calcium imaging dataset leveraged transgenic lines to drive the expression of a genetically encoded calcium indicator in specific populations of excitatory neurons (often constrained to a specific cortical layer) or GABAergic interneurons. In total, we recorded activity from over 63,000 neurons across 6 cortical areas, 4 cortical layers, and 14 transgenic lines.
The Neuropixels electrophysiology dataset used silicon probes to record simultaneously from the same six cortical areas targeted in the two-photon dataset, as well as additional subcortical regions. While cell type specificity was largely lost, transgenic lines did enable optotagging of specific inhibitory interneurons. The Neuropixels dataset included recordings from over 40,000 units passing quality control across more than 14 brain regions and 4 mouse lines. In both surveys, mice were passively exposed to a range of visual stimuli. These included drifting and flashed sinusoidal gratings to measure traditional spatial and temporal tuning properties, sparse noise or windowed gratings to map spatial receptive fields, images and movies that have natural spatial and temporal statistics, and epochs of mean luminance to capture neurons' spontaneous activity. These stimuli were selected to provide a broad survey of visual physiological activity and to compare the organization of visual responses across brain regions and cell types. Mice were awake during these experiments and head-fixed on a spinning disk that permitted them to run in a self-initiated and unguided manner. Subsequent surveys of neural activity in mice performing a behavioral task are not discussed here, as it is too soon to begin evaluating their impact on the field. Once the data was collected, we wanted to minimize the friction required for external groups to access it and mine it for insights. This is challenging! Providing unfettered access to the data can be accomplished by providing a simple download link; yet, unless the user understands what is contained in the file and has installed the appropriate libraries for parsing the data, its usefulness is limited.
At the other extreme, a web-based analysis interface that does not require any downloading or installation can facilitate easy data exploration, but this approach has high upfront development costs and imposes limitations on the analyses that can be carried out. These conflicting demands are apparent in our custom tool, the AllenSDK, a Python package that serves as the primary interface for downloading data from these surveys as well as other Allen Institute resources. In the case of the Allen Brain Observatory, the AllenSDK provides wrapper functions for interacting with the Neurodata Without Borders (NWB) files in which the data is stored. Intuitive functions enable users to search metadata for specific experimental sessions and extract the relevant data assets. Whereas our two-photon calcium imaging survey was accompanied by a dedicated web interface that displayed summary plots for every cell and experiment ( observatory.brain-map.org/visualcoding ), we discontinued this practice because of its associated development costs and because most users preferred to directly access the data in their own analysis environment. One challenge with sharing cellular neurophysiology data is that it includes multiple high-dimensional data streams. Many other data modalities (e.g., gene expression) can be reduced to a derived metric and easily shared in a tabular format (e.g., a cell-by-gene table). In contrast, neurophysiological data is highly varied, with researchers taking different approaches to both data processing (e.g., spike sorting or cell segmentation) and analysis. While these data can be analyzed as a large collection of single-cell recordings, they can also be approached as population recordings, leveraging the fact that hundreds to thousands of neurons are recorded simultaneously.
Thus, particularly for a survey-style dataset not designed to test a particular hypothesis, it is hard to reduce these recordings to a simple set of derived metrics that encapsulate the full range of neural and behavioral states. Even when it is possible (e.g., we could have shared a table of single-cell receptive field and tuning properties as the end product), this confines any downstream analyses to those specific metrics, severely undermining the space of possible use cases. At the same time, if we had only shared the raw data, few researchers would have the resources or the inclination to build their own preprocessing and packaging pipelines. Therefore, we aimed to share our data in a flexible way to facilitate diverse use cases. For every session, we provided either spike times or fluorescence traces, temporally aligned stimulus information, the mouse's running speed and pupil tracking data, as well as intermediate, derived data constructs, such as ROI masks, neuropil traces, and pre- and post-demixing traces for two-photon microscopy, and waveforms across channels for Neuropixels. All are contained within the NWB files. In addition, we uploaded the more cumbersome, terabyte-scale raw imaging movies and voltage traces to the public cloud for users focused on data processing algorithms. The first round of two-photon calcium imaging data was released in July 2016, followed by three subsequent releases that expanded the dataset (green triangles in ). The Neuropixels dataset became available in October 2019 (yellow triangle in ). At the end of 2022, there were 104 publications or preprints that reuse these two datasets, with first authors at 50 unique institutions. This demonstrates the broad appeal of applying a survey-style approach to the domain of in vivo neurophysiology.
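The metadata-search workflow mentioned above can be made concrete with a toy example. The AllenSDK exposes session metadata as pandas DataFrames that can be filtered before any large files are downloaded; the miniature table below is a made-up stand-in, and its column names and session IDs are illustrative rather than the AllenSDK's exact schema.

```python
import pandas as pd

# Hypothetical miniature session table standing in for the real one,
# which the AllenSDK returns as a pandas DataFrame.
sessions = pd.DataFrame({
    "session_id": [715093703, 719161530, 721123822],   # illustrative IDs
    "genotype":   ["wt/wt", "Sst-IRES-Cre", "Pvalb-IRES-Cre"],
    "unit_count": [884, 755, 824],
})

# Keep only transgenic sessions with a healthy sorted-unit yield;
# only the selected session IDs would then be downloaded.
selected = sessions[(sessions["genotype"] != "wt/wt")
                    & (sessions["unit_count"] > 800)]
```

Filtering first and fetching the corresponding NWB files second keeps the heavy downloads limited to the sessions a given analysis actually needs.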
We found three general use cases of Allen Brain Observatory data in the research community:

1. Generating novel discoveries about brain function
2. Validating new computational models and algorithms
3. Comparing with experiments performed outside the Allen Institute

Below, we highlight some examples of these three use cases, for both the two-photon calcium imaging and Neuropixels datasets. All these studies were carried out by groups external to the Allen Institute, and frequently without any interaction with us, speaking to the ease with which data can be downloaded and analyzed.

Making discoveries

One group used Allen Brain Observatory two-photon imaging data to explore the stability of neural responses over time. They previously found that neurons in a recurrent network model with high inherent plasticity had more variability in their stimulus selectivity than those with low plasticity. They also found that neurons with high inherent plasticity have higher population coupling. To examine whether these properties were related, they analyzed real calcium-dependent fluorescence traces from the Allen Brain Observatory to test whether population coupling and response variability were correlated. The authors found that, indeed, population coupling is correlated with the change in orientation and direction tuning of neurons over the course of a single experiment, an unexpected result linking population activity with individual neural responses. Another group examined whether a deep artificial neural network (ANN) could model both the ventral and dorsal pathways of the visual system in a single network with a single cost function. They trained two networks, one with a single pathway and the other with two parallel pathways, using a Contrastive Predictive Coding loss function.
Comparing the representations of these networks with the neural responses in the two-photon imaging dataset, they found that the single pathway produced ventral-like representations but failed to capture the representational similarity of the dorsal areas. The parallel-pathway network, though, induced distinct representations that mapped onto the ventral/dorsal division. This work is an illustration of how large-scale data can guide the development of neural network modeling, and, conversely, how those approaches can inform our understanding of cortical function. A third study analyzed the time course of stimulus-specific adaptation in 2365 neurons in the Neuropixels dataset and discovered that a single presentation of a drifting or static grating in a specific orientation leads to a reduction in the response to the same visual stimulus up to eight trials (22 s) in the future. This stimulus-specific, long-term adaptation persists despite intervening stimuli, and is seen in all six visual cortical areas, but not in the visual thalamic areas (LGN and LP), which returned to baseline after one or two trials. This is a remarkable example of a discovery that was not envisioned when designing our survey, but for which our stimulus set was well suited. At least three publications have taken advantage of the fact that every Neuropixels insertion targeting visual cortex and thalamus also passed through the intervening hippocampus and subiculum. The first analyzed the local field potential from these electrodes to detect the onset of sharp-wave ripples, fast oscillations believed to mediate offline information transfer out of the hippocampus. They found that sharp-wave ripples coincided with a transient, cortex-wide increase in functional connectivity with the hippocampus.
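A minimal version of this style of ripple detection — band-pass the hippocampal LFP in the ripple band, take the analytic-signal envelope, and threshold it — might look like the sketch below. The 150–250 Hz band and 3 SD threshold are common conventions, not necessarily the cited study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs, band=(150.0, 250.0), thresh_sd=3.0):
    """Flag candidate ripple samples: band-pass the LFP in the ripple band,
    take the Hilbert (analytic-signal) envelope, and z-score-threshold it."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, lfp)))
    z = (env - env.mean()) / env.std()
    return z > thresh_sd
```

A full pipeline would additionally merge nearby supra-threshold samples into events and enforce minimum durations, but the envelope-threshold core is as above.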
The second examined the topography of this functional connectivity and found that distinct but intermingled classes of visual cortex neurons were preferentially modulated by ripples originating in the dorsal hippocampus, while others were more coupled to ripples in the intermediate hippocampus. The third analyzed the responses of hippocampal neurons to natural movies and found that many displayed highly selective 'movie fields' that were often as robust as those of neurons in visual cortex. However, in contrast to visual cortex, the movie fields in the hippocampus disappeared if the movie frames were shuffled (thereby disrupting the learned temporal sequence). Although the Allen Brain Observatory experiments were not originally designed to test hypotheses of hippocampal function, the Neuropixels dataset turned out to be attractive for understanding the interactions between this structure and visual cortical and thalamic regions.

Validating models and algorithms

Many researchers used the numerous and diverse fluorescence movies in the two-photon imaging dataset to validate image processing algorithms. As the different transgenic lines used in the dataset target different populations of neurons, they have different labeling densities. As a result, there are some very sparse movies with only a dozen neurons within the field of view and others with up to ~400 neurons. This makes the dataset a rich resource for benchmarking methods for cell segmentation, matching neurons across multiple sessions, and removing false transients in the fluorescence traces. One study used the Neuropixels survey to showcase a novel method for identifying statistically significant changes in neural activity. Their method, called ZETA (Zenith of Event-based Time-locked Anomalies), detects whether a cell is responsive to stimulation without the need to tune parameters, such as spike bin size.
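For contrast with such parameter-free tests, a conventional windowed responsiveness measure — the kind of approach whose bin and window choices ZETA avoids — can be sketched as follows; the 10 ms window here is an arbitrary illustrative choice, not a value from the cited work.

```python
import numpy as np

def event_response(spike_times, event_times, window=0.01):
    """Mean spike count in a post-event window minus a matched pre-event
    (baseline) window; a conventional windowed alternative to bin-free tests."""
    spikes = np.sort(np.asarray(spike_times))
    post = [np.searchsorted(spikes, t + window) - np.searchsorted(spikes, t)
            for t in event_times]
    pre = [np.searchsorted(spikes, t) - np.searchsorted(spikes, t - window)
           for t in event_times]
    return np.mean(post) - np.mean(pre)
```

The result depends directly on the chosen window length, which is exactly the tuning burden that methods like ZETA aim to remove.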
As an example, they analyzed the 'optotagging' portion of the Neuropixels experiments carried out in Vip-Cre × ChR2 mice, involving the activation of Vip+ interneurons with brief pulses of blue light. Although this protocol was intended to aid in the identification of genetically defined cell types at the end of each recording session, the authors show how these recordings can be exploited to test the network-level impact of triggering a particular class of interneurons. ZETA identifies not only Vip+ neurons that are directly activated by the light pulses, but also nearby cortical neurons that are inhibited on short timescales and disinhibited over longer timescales. Another study used raw data from the Neuropixels survey to validate SpikeInterface, a Python package that runs multiple spike sorting algorithms in parallel and compares their outputs. We originally performed spike sorting with one such algorithm, Kilosort 2. The authors of this paper used SpikeInterface to compare the performance of Kilosort 2 and five additional algorithms. In one example session, over 1000 distinct units were detected by only one sorter, while only 73 units were detected by five or more sorters. At first glance, this finding seems to indicate a high level of disagreement among the algorithms. However, when comparing these results with those from simulations, it became clear that the low-agreement units were mainly false positives, while the true positive units were highly consistent across algorithms. This finding, and the SpikeInterface package in general, will be essential for improving the accuracy of spike sorting in the future.

Comparisons with other datasets

One group used supervised and semi-supervised learning algorithms to classify cortical visual areas based on either spontaneous activity or visually evoked responses. Cortical visual areas, defined based on retinotopic maps, are thought to serve distinct visual processing functions.
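The unit-agreement counting at the heart of the SpikeInterface comparison above can be illustrated with a simple pairwise score: match spikes from two sorters' versions of a putative unit within a small coincidence window and normalize Jaccard-style. This is a stand-in for SpikeInterface's more careful matching, and the ±0.4 ms window is a hypothetical choice.

```python
import numpy as np

def agreement(times_a, times_b, delta=4e-4):
    """Jaccard-style agreement between two sorters' spike trains for the
    same putative unit; spikes match if within +/- delta seconds."""
    a, b = np.sort(times_a), np.sort(times_b)
    matched, j = 0, 0
    for t in a:
        while j < len(b) and b[j] < t - delta:
            j += 1
        if j < len(b) and abs(b[j] - t) <= delta:
            matched += 1
            j += 1
    return matched / max(len(a) + len(b) - matched, 1)
```

Units with near-zero agreement across an ensemble of sorters are the candidates for the false positives described above.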
Rather than compare the tuning properties of neurons across the areas, as many studies (including our own) have done, the authors trained classifiers to successfully determine area membership and boundaries from the neural responses to visual stimuli. They compared the performance of these algorithms on their own wide-field imaging dataset and on our two-photon imaging dataset. This extends and validates their results under conditions in which single-cell responses are available. Another group performed electrophysiological recordings in mouse cortex to examine 'mismatch' responses, where neurons respond to differences between visual cues and motor signals from running. The authors argued that these responses derive from visual features rather than the mismatch itself, showing that these perturbation responses might be explained by preferential tuning to low temporal frequencies. The authors used our two-photon imaging dataset to demonstrate a difference in temporal frequency tuning across cortical layers, with neurons in superficial layers being tuned to lower frequencies, supporting the fact that mismatch responses are predominantly observed in superficial layers. While this use case is perhaps one of the simplest, it is an elegant demonstration of gaining validation for implications that emerge from one's own experiments. A further study compared spiking activity from the Neuropixels dataset to calcium-dependent fluorescence changes recorded in their laboratory. Their analysis focused on the precision with which the orientation of static gratings can be decoded from activity in visual cortex. Using their own two-photon calcium imaging dataset, which consisted of up to 50,000 simultaneously recorded neurons, they found that it was possible to use neural activity to discriminate orientations that differ by less than 0.4°, about a factor of 100 better than reported behavioral thresholds in mice.
As an important control, they showed that the trial-to-trial variability in evoked responses to static gratings was nearly identical between their two-photon data and our Neuropixels electrophysiology data, indicating that their main result was not likely to depend on the recording modality. This use case is noteworthy because the preprint containing this comparison appeared less than a month after our dataset became publicly available. directly compared the Allen Neuropixels dataset with Neuropixels recordings from LGN and V1 carried out locally. They first analyzed gamma-band coherence between these two structures in the Allen Brain Observatory dataset and found evidence in support of their hypothesis that inter-regional coherence is primarily driven by afferent inputs. This contrasts with the ‘communication through coherence’ hypothesis , which posits that pre-existing inter-regional coherence is necessary for information transfer. They then performed a separate set of Neuropixels recordings in which they found that silencing cortex (via optogenetic activation of somatostatin-positive interneurons) did not change the degree of coherence between LGN and V1, indicating that V1 phase-locking is inherited from LGN, further supporting their hypothesis. This is an insightful example of how a survey dataset can be used to test a hypothesis, followed by a set of more specific follow-up experiments that refine the initial findings. Use in education These surveys have also been used in a variety of educational contexts. Many computational neuroscience summer courses have presented them as potential source of student projects. This includes the Allen Institute’s own Summer Workshop on the Dynamic Brain as well as the Cold Spring Harbor Neural Data Science and Computational Neuroscience: Vision courses; Brains, Minds, and Machines Summer Course at the Marine Biological Laboratory; and the Human Brain Project Education Program. 
used Allen Brain Observatory two-photon imaging data to explore the stability of neural responses over time. They previously found that neurons in a recurrent network model with high inherent plasticity had more variability in their stimulus selectivity than those with low plasticity. They also found that neurons with high inherent plasticity have higher population coupling. To examine whether these were related, they analyzed real calcium-dependent fluorescence traces from the Allen Brain Observatory to examine whether population coupling and response variability were correlated. The authors found that, indeed, population coupling is correlated with the change in orientation and direction tuning of neurons over the course of a single experiment, an unexpected result linking population activity with individual neural responses. examined whether a deep artificial neural network (ANN) could model both the ventral and dorsal pathways of the visual system in a single network with a single cost function. They trained two networks, one with a single pathway and the other with two parallel pathways, using a Contrastive Predictive Coding loss function. Comparing the representations of these networks with the neural responses in the two-photon imaging dataset, they found that the single pathway produced ventral-like representations but failed to capture the representational similarity of the dorsal areas. The parallel pathway network, though, induced distinct representations that mapped onto the ventral/dorsal division.
This work is an illustration of how large-scale data can guide the development of neural network modeling, and, conversely, how those approaches can inform our understanding of cortical function. analyzed the time course of stimulus-specific adaptation in 2365 neurons in the Neuropixels dataset and discovered that a single presentation of a drifting or static grating in a specific orientation leads to a reduction in the response to the same visual stimulus up to eight trials (22 s) in the future. This stimulus-specific, long-term adaptation persists despite intervening stimuli, and is seen in all six visual cortical areas, but not in visual thalamic areas (LGN and LP), which returned to baseline after one or two trials. This is a remarkable example of a discovery that was not envisioned when designing our survey, but for which our stimulus set was well suited. At least three publications have taken advantage of the fact that every Neuropixels insertion targeting visual cortex and thalamus also passed through the intervening hippocampus and subiculum. analyzed the local field potential from these electrodes to detect the onset of sharp-wave ripples, fast oscillations believed to mediate offline information transfer out of the hippocampus . They found that sharp-wave ripples coincided with a transient, cortex-wide increase in functional connectivity with the hippocampus. examined the topography of this functional connectivity and found that distinct but intermingled classes of visual cortex neurons were preferentially modulated by ripples originating in dorsal hippocampus, while others were more coupled to ripples in intermediate hippocampus. analyzed the responses of hippocampal neurons to natural movies and found that many displayed highly selective ‘movie fields’ that were often as robust as those of neurons in visual cortex. 
However, in contrast to visual cortex, the movie fields in the hippocampus disappeared if the movie frames were shuffled (thereby disrupting the learned temporal sequence). Although the Allen Brain Observatory experiments were not originally designed to test hypotheses of hippocampal function, the Neuropixels dataset turned out to be attractive for understanding the interactions between this structure and visual cortical and thalamic regions. Many researchers used the numerous and diverse fluorescence movies in the two-photon imaging dataset to validate image processing algorithms. As the different transgenic lines used in the dataset target different populations of neurons, they have different labeling densities. As a result, there are some very sparse movies with only a dozen neurons within the field of view and others with up to ~400 neurons. This makes the dataset a rich resource for benchmarking methods for cell segmentation , matching neurons across multiple sessions , and removing false transients in the fluorescence traces . used the Neuropixels survey to showcase a novel method for identifying statistically significant changes in neural activity. Their method, called ZETA (Zenith of Event-based Time-locked Anomalies), detects whether a cell is responsive to stimulation without the need to tune parameters, such as spike bin size. As an example, they analyze the ‘optotagging’ portion of the Neuropixels experiments carried out in Vip-Cre × ChR2 mice, involving the activation of Vip+ interneurons with brief pulses of blue light. Intended to aid in the identification of genetically defined cell types at the end of each recording session, the authors show how these recordings can be exploited to test the network-level impact of triggering a particular class of interneurons. 
ZETA identifies not only Vip+ neurons that are directly activated by the light pulses, but also nearby cortical neurons that are inhibited on short timescales and disinhibited over longer timescales. used raw data from the Neuropixels survey to validate SpikeInterface, a Python package that runs multiple spike sorting algorithms in parallel and compares their outputs. We originally performed spike sorting with one such algorithm, Kilosort 2 . The authors of this paper used SpikeInterface to compare the performance of Kilosort 2 and five additional algorithms. In one example session, over 1000 distinct units were detected by only one sorter, while only 73 units were detected by five or more sorters. At first glance, this finding seems to indicate a high level of disagreement among the algorithms. However, when comparing these results with those from simulations, it became clear that the low-agreement units were mainly false positives, while the true positive units were highly consistent across algorithms. This finding, and the SpikeInterface package in general, will be essential for improving the accuracy of spike sorting in the future.

Comparisons with other datasets

used supervised and semi-supervised learning algorithms to classify cortical visual areas based on either spontaneous activity or visually evoked responses. Cortical visual areas, defined based on retinotopic maps, are thought to serve distinct visual processing functions. Rather than compare tuning properties of neurons across the areas, as many studies (including our own) have done, the authors trained classifiers to successfully determine the area membership and boundaries from the neural responses to visual stimuli. They compared the performance of these algorithms for their own wide-field imaging dataset with our two-photon imaging dataset. This provides an extension and validation of their results to conditions in which single-cell responses are available.
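The core idea of such area classification can be sketched in a few lines: treat each recording as a response vector and assign a held-out vector to the area whose mean response is closest. The numbers and the nearest-centroid rule below are illustrative stand-ins, not the supervised and semi-supervised methods the study actually used:

```python
import math

# Toy 'training data': per-neuron response vectors, grouped by the area
# they were recorded in. All values are invented for illustration.
training = {
    "VISp":  [[9.0, 1.0, 0.5], [8.5, 1.2, 0.7], [9.2, 0.8, 0.4]],
    "VISal": [[1.0, 7.5, 2.0], [0.8, 8.1, 1.7], [1.3, 7.8, 2.2]],
}

def centroid(vectors):
    """Mean response vector across a set of neurons."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(response, training):
    """Assign a response vector to the area with the closest centroid."""
    return min(training,
               key=lambda area: math.dist(response, centroid(training[area])))

print(classify([8.8, 1.1, 0.6], training))  # a VISp-like response
print(classify([1.1, 7.7, 1.9], training))  # a VISal-like response
```

A real pipeline would cross-validate and operate on thousands of neurons per area, but the structure of the problem, response vectors in and area labels out, is the same.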
performed electrophysiological recordings in mouse cortex to examine ‘mismatch’ responses, where neurons respond to discrepancies between visual input and self-generated motor signals during running. The authors argued that these responses derive from visual features rather than the mismatch itself, showing that these perturbation responses might be explained by preferential tuning to low temporal frequencies. The authors used our two-photon imaging dataset to demonstrate a difference in temporal frequency tuning across cortical layers, with neurons in superficial layers being tuned to lower frequencies, consistent with the observation that mismatch responses are predominantly observed in superficial layers. While this use case is perhaps one of the simplest, it is an elegant demonstration of gaining validation for implications that emerge from one’s own experiments. compared spiking activity from the Neuropixels dataset to calcium-dependent fluorescence changes recorded in their laboratory. Their analysis focused on the precision with which the orientation of static gratings can be decoded from activity in visual cortex. Using their own two-photon calcium imaging dataset that consisted of up to 50,000 simultaneously recorded neurons, they found that it was possible to use neural activity to discriminate orientations that differ by less than 0.4°, about a factor of 100 better than reported behavioral thresholds in mice. As an important control, they showed that the trial-to-trial variability in evoked responses to static gratings was nearly identical between their two-photon data and our Neuropixels electrophysiology data, indicating that their main result was not likely to depend on the recording modality. This use case is noteworthy because the preprint containing this comparison appeared less than a month after our dataset became publicly available. directly compared the Allen Neuropixels dataset with Neuropixels recordings from LGN and V1 carried out locally.
They first analyzed gamma-band coherence between these two structures in the Allen Brain Observatory dataset and found evidence in support of their hypothesis that inter-regional coherence is primarily driven by afferent inputs. This contrasts with the ‘communication through coherence’ hypothesis , which posits that pre-existing inter-regional coherence is necessary for information transfer. They then performed a separate set of Neuropixels recordings in which they found that silencing cortex (via optogenetic activation of somatostatin-positive interneurons) did not change the degree of coherence between LGN and V1, indicating that V1 phase-locking is inherited from LGN, further supporting their hypothesis. This is an insightful example of how a survey dataset can be used to test a hypothesis, followed by a set of more specific follow-up experiments that refine the initial findings.

Use in education

These surveys have also been used in a variety of educational contexts. Many computational neuroscience summer courses have presented them as a potential source of student projects. This includes the Allen Institute’s own Summer Workshop on the Dynamic Brain as well as the Cold Spring Harbor Neural Data Science and Computational Neuroscience: Vision courses; Brains, Minds, and Machines Summer Course at the Marine Biological Laboratory; and the Human Brain Project Education Program. Indeed, in some cases these projects have led to publications . Beyond these summer courses, these datasets are discussed in undergraduate classrooms, enabling students to learn computational methods with real data rather than toy models. This includes classes at the University of Washington, Brown University, and the University of California, San Diego.

To gain additional insight into the perspectives of end users, we interviewed eight scientists who published papers based on Allen Brain Observatory data.
There were three primary reasons why users chose to analyze these datasets: (1) they were interested in the datasets’ unique features, such as the number of recorded regions; (2) they lacked the ability to collect data from a particular modality (e.g., an imaging lab wanted to analyze electrophysiology data); or (3) they wanted to validate their own findings using an independent dataset. Although most users initially tried to access the data via the AllenSDK Python package, several found it easier to download the NWB files directly after exporting a list of URLs, particularly if they were using MATLAB for analysis. Common challenges included slow data download speeds, understanding the details of preprocessing steps, and data format changes (e.g., the original Neuropixels files were subsequently updated to adhere to the latest NWB standard, which broke compatibility with older versions of the AllenSDK). In most cases, reaching out to scientists at the Allen Institute cleared up these issues. Users also encountered obstacles related to the scale of the data: some scientists needed to learn how to submit jobs to their local high-performance computing cluster to speed up analysis or to develop new methods for retrieving and organizing data. But the size of the dataset was also one of its biggest advantages: “From a scientific perspective, facing such a rich dataset can be overwhelming at the beginning—there are so many questions that could be addressed with it and it’s easy to get lost. In my case, it was a blessing rather than a curse; my initial question was simply how different areas included in the dataset are modulated by ripples.
Having such a wide coverage of the hippocampal axis, I later asked myself whether ripples recorded on different probes differentially modulate neuronal activity outside of the hippocampus, which led me to some interesting and unexpected findings.” (Noam Nitzan, NYU) In general, researchers were enthusiastic about this resource: “I looked at several open data sets, and I quickly realized that the Allen Brain Observatory Neuropixels data set was the best documented open data set I found. The intuitive packaging in the NWB format, as well as the systematic repetition of experiments with a comparatively high number of mice and single units in various visual areas, made the decision to use the Allen dataset very easy.” (Marius Schneider, Ernst Strüngmann Institute) Journal referees seemed to respond positively to the use of Allen Brain Observatory data, although one user reported that a reviewer was concerned about their ability to adequately validate data they did not collect themselves. For future data releases, several users requested experiments with different types of visual stimuli, ideally chosen through interactions with the wider community. Although it is too early to assess the long-term relevance of the first two Allen Brain Observatory datasets, the more than 100 publications that mined this data over the last 6 years testify to its immediate impact. Our data has been used for a wide array of applications, many of which we did not envision when we designed the surveys. We attribute this success to several factors, including the scale of the dataset (tens of thousands of neurons across hundreds of subjects), our extensive curation and documentation efforts (in publications, white papers, and websites), a robust software kit for accessing and analyzing the data (the AllenSDK), and a well-organized outreach program (involving tutorials at conferences and a dedicated summer workshop). 
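As the interviews note, several users skipped the AllenSDK and downloaded NWB files directly from an exported list of URLs. A stdlib-only sketch of that filter-then-download workflow, using an entirely hypothetical manifest (the real AllenSDK manifest has a different schema, and the URLs below are placeholders):

```python
from urllib.parse import urlparse

# Hypothetical manifest entries; real session metadata is richer, but the
# filter-then-download workflow is the same idea.
manifest = [
    {"session_id": 1, "genotype": "wt/wt",        "n_units": 884,
     "url": "https://example.org/ecephys_session_1.nwb"},
    {"session_id": 2, "genotype": "Sst-IRES-Cre", "n_units": 755,
     "url": "https://example.org/ecephys_session_2.nwb"},
    {"session_id": 3, "genotype": "Vip-IRES-Cre", "n_units": 444,
     "url": "https://example.org/ecephys_session_3.nwb"},
]

def select_urls(manifest, min_units=0, genotype=None):
    """Return NWB download URLs for sessions matching simple criteria."""
    return [m["url"] for m in manifest
            if m["n_units"] >= min_units
            and (genotype is None or m["genotype"] == genotype)]

for url in select_urls(manifest, min_units=500):
    # Each selected file could then be fetched with
    # urllib.request.urlretrieve(url, filename), or any download manager.
    print(urlparse(url).path.lstrip("/"))
```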
One key lesson we learned is to facilitate different types of data reuse, as illustrated by the examples above. While many users primarily care about spike times or fluorescence traces, others require raw data. Because of this, it was fortuitous that we provided access to both. Sharing the data in a way that is flexible and well documented reduces constraints on which questions can be addressed, and is thus paramount for facilitating reuse. Indeed, while many papers leveraged the datasets to examine the visual functional properties of neurons or brain areas, many others used these data in a way that was agnostic to the visual context of the underlying experiments. We hope to see the sharing of both raw and processed cellular physiology data soon become ubiquitous. However, we know that our surveys were contingent on the efforts of a large team, including scientists from multiple disciplines, hardware and software engineers, research associates, and project managers. Assembling similar resources is untenable for most academic labs. Fortunately, there are ongoing developments that will lower the barriers to sharing and reusing data: increased standardization and cloud-based analysis tools.

Increased standardization

The success of data reuse rests on the FAIR Principles: data must be Findable, Accessible, Interoperable, and Reusable. In other words, prospective analysts must be able to easily identify datasets appropriate to their needs and know how to access and use the data assets. This is best accomplished if data is stored in standardized formats, with common conventions for rich metadata and easy-to-use tools for search and visualization. The Allen Institute has invested heavily in developing and promoting Neurodata Without Borders (NWB) as a standard data model and interchange format for neurophysiology data.
NWB has been criticized for being both too restrictive (as it often takes a dedicated programmer to generate format-compliant files from lab-specific data) and not restrictive enough (as it does not enforce sufficient metadata conventions, especially related to behavioral tasks). Nevertheless, there are overwhelming advantages to having common, language-agnostic formatting conventions across the field. Building a rich ecosystem of analysis and visualization tools based on NWB will incentivize additional labs to store their data in this format and even to directly acquire data in NWB files to make data immediately shareable (this is already possible for electrophysiological recordings using the Open Ephys GUI; ). We envision a future in which it will require less effort for neurophysiologists to comply with community-wide standards than to use their own idiosyncratic conventions because standardized formats serve as a gateway to a host of pre-existing, carefully validated analysis packages. Standardized metadata conventions are also critical for promoting data reuse. Our surveys are accompanied by extensive white papers, code repositories, and tutorials that detail the minutiae of our methods and tools, beyond the standard ‘Methods’ section in publications (see for links). For the community at large, a more scalable solution is needed. Standardized and machine-readable metadata needs to extend beyond administrative metadata (describing authors, institutions, and licenses) to include thorough and detailed experimental conditions and parameters in a self-contained manner. As data sharing becomes more widespread, standardization of metadata will be particularly important for reducing ‘long tail’ effects in which a small number of datasets are reused extensively, while others are disregarded, as observed in the reuse of CRCNS data. 
To avoid a situation in which publicly available datasets from more focused studies are overlooked, all these studies should be indexed by a single database that can be filtered by relevance, making it much easier for researchers to identify data that is appropriate for their needs. The recently launched Distributed Archives for Neurophysiology Data Integration (DANDI) addresses this concern by enforcing the use of NWB for all shared datasets. In the two years since the first dataset was uploaded, the archive has grown to host more than 100 NWB-formatted datasets accessible via download links, a command-line interface, or a cloud-based JupyterHub environment.

Box 1. Web resources for Allen Brain Observatory Visual Coding datasets

White papers describing the surveys
2P – http://help.brain-map.org/display/observatory/Documentation
Neuropixels – https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels

Code repositories
AllenSDK – https://github.com/alleninstitute/allensdk
2P – https://github.com/AllenInstitute/visual_coding_2p_analysis
Neuropixels – https://github.com/AllenInstitute/neuropixels_platform_paper

Tutorials
2P – https://allensdk.readthedocs.io/en/latest/brain_observatory.html
Neuropixels – https://allensdk.readthedocs.io/en/latest/visual_coding_neuropixels.html

The human neuroimaging field has faced similar challenges. To address the lack of standardization across public datasets, the community spearheaded the development of the Brain Imaging Data Structure (BIDS), a set of schemas for storing MRI volumes along with associated behavioral and physiological measures. NWB shares many features of the BIDS standard, including a hierarchical structure, separation of raw and derived data, and support for extensions. BIDS was essential for the success of OpenNeuro, a public neuroimaging data archive which, as of 2021, included data from over 20,000 subjects.
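What machine-readable metadata that goes beyond administrative fields might look like can be made concrete with a minimal required-field check. The section and key names below are illustrative only; they are not drawn from the NWB or BIDS schemas:

```python
# Required sections and keys for a self-contained session record.
# Illustrative convention only, not an NWB or BIDS schema.
REQUIRED = {
    "administrative": ["authors", "institution", "license"],
    "subject":        ["species", "genotype", "age_days"],
    "experiment":     ["modality", "stimulus_set", "targeted_structures"],
}

def missing_fields(record):
    """List 'section.key' entries absent from a metadata record."""
    return [f"{section}.{key}"
            for section, keys in REQUIRED.items()
            for key in keys
            if key not in record.get(section, {})]

record = {
    "administrative": {"authors": ["A. Researcher"],
                       "institution": "Example Institute",
                       "license": "CC-BY-4.0"},
    "subject":        {"species": "Mus musculus", "genotype": "wt/wt",
                       "age_days": 110},
    "experiment":     {"modality": "extracellular electrophysiology",
                       "stimulus_set": "drifting gratings",
                       "targeted_structures": ["VISp", "LGd", "CA1"]},
}

print(missing_fields(record))  # [] (complete record)
print(missing_fields({"subject": {"species": "Mus musculus"}})[:3])
```

A validator like this, run at upload time by an archive, is one way to keep experimental conditions and parameters from being an afterthought.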
Given the related aims of OpenNeuro and DANDI, there are many opportunities for the leaders and maintainers of these resources to learn from one another. While the adoption of consistent data formatting conventions is a welcome development, there are also benefits to greater standardization of the protocols, hardware, and software used for data collection. One way this can be achieved is through coordinated cross-laboratory experiments, such as those implemented by the International Brain Laboratory (IBL), a consortium that uses Neuropixels to survey responses across the entire mouse brain in a visual decision-making task. It can also be beneficial to carry out smaller-scale studies on infrastructure built for surveys, as we have done as part of the "OpenScope" project. OpenScope allows members of the wider community to propose experiments to be run by Allen Brain Observatory staff. This lowers the barriers to generating high-quality, standards-compliant data, especially for labs whose work is primarily computational. Similarly, the IBL is now entering a phase in which member laboratories conduct more focused studies that take advantage of existing rigs and data pipelines. To encourage data sharing, the field of neurophysiology also needs greater standardization in the way data mining is tracked and credited. Digital object identifiers (DOIs) are an essential first step; we regret not making them an integral part of the Visual Coding data releases. However, they have not solved the problem of discovering reuse, as they are not always included in publications. It is more common to include a reference to the original paper in which the dataset was described, but this makes it difficult to distinguish instances of reuse from other types of citations. Currently the onus is on those releasing the data to keep track of who accesses it.
To take one example, the cai-1 calcium indicator calibration dataset from the Svoboda Lab at HHMI Janelia Research Campus has only five citations tracked in Google Scholar. Yet a deeper dive into the literature reveals that this dataset has been reused in a wide range of publications and conference papers that benchmark methods for inferring spike rate from calcium fluorescence signals, of which there are likely over 100 in total. Many of these papers only cite the original publication associated with this dataset, refer to the repository from which the data was downloaded (CRCNS), or do not cite the data source at all. The lack of an agreed-upon method for citing datasets (like we have for journal articles) is a loss for the community, as it hinders our ability to give appropriate credit to those responsible for collecting widely used datasets. A simple, widely accepted method for citing data would benefit all authors, as it has been shown that publications within the astrophysics community that provide links to the underlying data gain more citations on average than those that do not.

Cloud-based analysis tools

To enable more efficient data mining, end users should ideally not need to download data at all. This is particularly true as the volume of data keeps growing (e.g., a single Allen Brain Observatory Neuropixels session generates about 1.2 TB of raw data). Therefore, the goal should be to bring users to the data, rather than the data to users. This is supported by our interviews with end users, who cited slow download speeds as a key challenge. Generic analysis tools, such as Amazon’s SageMaker and Google’s Colab, already make it possible to set up a familiar coding environment in the cloud. However, we are most excited about tools that lower the barriers and the costs of cloud analysis for scientists.
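The scale argument is easy to quantify with the roughly 1.2 TB-per-session figure above. The sustained bandwidth and egress price in this sketch are assumed round numbers, not quotes from any provider:

```python
# Back-of-envelope: why "bring users to the data" wins at this scale.
# 1.2 TB/session comes from the text; the 300 Mbps sustained rate and
# $0.09/GB egress price below are assumed round numbers.
TB = 1000  # GB per TB (decimal convention)

def download_hours(n_sessions, tb_each=1.2, mbps=300):
    """Hours to pull raw data at a given sustained download rate."""
    gigabytes = n_sessions * tb_each * TB
    return gigabytes * 8000 / mbps / 3600  # GB -> megabits -> hours

def egress_cost_usd(n_sessions, tb_each=1.2, usd_per_gb=0.09):
    """Cost to move the same data out of cloud storage."""
    return n_sessions * tb_each * TB * usd_per_gb

print(round(download_hours(30), 1))   # 266.7 hours, about 11 days
print(round(egress_cost_usd(30), 2))  # about $3240
```

At these assumed rates, pulling the raw data for even 30 sessions means over a week of sustained transfer and a four-figure egress bill, which is the case for moving analysis to the data rather than the reverse.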
Some of the most promising tools include DataJoint, DandiHub, NeuroCAAS, Binder, and Code Ocean (many of which are built on top of the powerful Jupyter platform). All of these are aimed at improving the reproducibility of scientific analyses while shielding users from the details of configuring cloud services. Cloud-based analysis is not a panacea. Although individual tools can be vendor-agnostic, there will be a push to centralize around a single cloud platform, given the high cost of transferring data out of cloud storage. This could lead to a single company monopolizing the storage of neurophysiology data; it would therefore be prudent to invest in a parallel distribution system that is controlled by scientists. In addition, it is (perhaps not surprisingly) notoriously easy for unwary users to provision expensive cloud computing resources; without safeguards in place, a single long-running analysis on a powerful cloud workstation could exhaust a lab’s entire annual budget. Despite these drawbacks, we believe that a move to cloud-based analysis will be essential for reducing the friction involved in adopting new datasets. We plan to move toward supporting a cloud-native sharing model more directly in our upcoming data releases.

Fostering a culture of data reuse

The value of open data is best realized when it is conveniently accessible. Whether this involves new discoveries or comparing results across studies, data mining is vital for progress in neuroscience, especially as the field as a whole shifts toward more centralized ‘Observatories’ for mice and non-human primates. The BRAIN Initiative has invested considerable resources in advancing instruments and methods for recording a large number of neurons in more sophisticated behavioral contexts. Yet the analytical methods for understanding and interpreting large datasets are lagging, as many of our theoretical paradigms emerged from an era of small-scale recordings.
In order to develop theories that can explain brain-wide cellular neurophysiology data, it is critical to maximize data reuse. This poses a set of challenges. Any time a scientist uses a new dataset, they must both learn how to access and manipulate it and decide whether it is appropriate for their question. The latter is the actual scientific challenge, and is where scientists should expend the bulk of their energy. To facilitate this, we first need increased compliance with standards and enhanced tooling around those standards. The more straightforward and intuitive it is to analyze a particular dataset, the more likely it is to be reused. The full burden of refining and adhering to these standards should not fall on the good intentions of individual researchers; instead, we need funding agencies and institutions to recognize the value of open data and allocate resources to facilitate the use of such standards. Everyone benefits when scientists can focus on actual biology rather than on the technical challenges of sharing and accessing data. Second, we need our evaluation of data reuse to ensure that researchers have identified data assets pertinent to their questions and have accounted for the limitations of an experimental paradigm. For instance, we have shown that a naïve comparison of cellular properties measured in the same visual areas across our Neuropixels and two-photon calcium imaging datasets reveals substantial discrepancies. These can only be reconciled by accounting for the bias inherent in each recording modality, as well as the data processing steps leading to the calculation of functional metrics. Effective data reuse requires that we, as a field, focus more of our energies on better communicating these important technical factors and holding researchers accountable for understanding them when they analyze someone else’s data. Neuroscientists have traditionally been taught to address questions by collecting new data.
As data sharing becomes more prevalent, neuroscientists’ first instinct should instead be to search for existing data that may offer insights into the problem at hand, whether or not it was originally intended for this purpose. Even in situations where the ‘perfect’ dataset does not yet exist, it is likely that researchers can exploit available data to refine a broad question into one that is more focused, and thus experimentally more tractable. Just as young scientists are trained to discover, interpret, and cite relevant publications, it is imperative that they are also taught to effectively identify, evaluate, and mine open datasets.
We envision a future in which it will require less effort for neurophysiologists to comply with community-wide standards than to use their own idiosyncratic conventions because standardized formats serve as a gateway to a host of pre-existing, carefully validated analysis packages. Standardized metadata conventions are also critical for promoting data reuse. Our surveys are accompanied by extensive white papers, code repositories, and tutorials that detail the minutiae of our methods and tools, beyond the standard ‘Methods’ section in publications (see Box 1 for links). For the community at large, a more scalable solution is needed. Standardized and machine-readable metadata needs to extend beyond administrative metadata (describing authors, institutions, and licenses) to include thorough and detailed experimental conditions and parameters in a self-contained manner. As data sharing becomes more widespread, standardization of metadata will be particularly important for reducing ‘long tail’ effects in which a small number of datasets are reused extensively, while others are disregarded, as observed in the reuse of CRCNS data. To avoid a situation in which publicly available datasets from more focused studies are overlooked, all these studies should be indexed by a single database that can be filtered by relevance, making it much easier for researchers to identify data that is appropriate for their needs. The recently launched Distributed Archives for Neurophysiology Data Integration (DANDI) addresses this concern by enforcing the use of NWB for all shared datasets. In the two years since the first dataset was uploaded, the archive has grown to host more than 100 NWB-formatted datasets accessible via download links, a command-line interface, or within a cloud-based JupyterHub environment. Box 1.
Web resources for Allen Brain Observatory Visual Coding datasets

White papers describing the surveys:
- 2P – http://help.brain-map.org/display/observatory/Documentation
- Neuropixels – https://portal.brain-map.org/explore/circuits/visual-coding-neuropixels

Code repositories:
- AllenSDK – https://github.com/alleninstitute/allensdk
- 2P – https://github.com/AllenInstitute/visual_coding_2p_analysis
- Neuropixels – https://github.com/AllenInstitute/neuropixels_platform_paper

Tutorials:
- 2P – https://allensdk.readthedocs.io/en/latest/brain_observatory.html
- Neuropixels – https://allensdk.readthedocs.io/en/latest/visual_coding_neuropixels.html

The human neuroimaging field has faced similar challenges. To address the lack of standardization across public datasets, the community spearheaded the development of the Brain Imaging Data Structure (BIDS), a set of schemas for storing MRI volumes along with associated behavioral and physiological measures. NWB shares many features of the BIDS standard, including a hierarchical structure, separation of raw and derived data, and support for extensions. BIDS was essential for the success of OpenNeuro, a public neuroimaging data archive which, as of 2021, included data from over 20,000 subjects. Given the related aims of OpenNeuro and DANDI, there are many opportunities for the leaders and maintainers of these resources to learn from one another. While the adoption of consistent data formatting conventions is a welcome development, there are also benefits to greater standardization of protocols, hardware, and software used for data collection. One way this can be achieved is through coordinated cross-laboratory experiments, such as those implemented by the International Brain Laboratory (IBL), a consortium that uses Neuropixels to survey responses across the entire mouse brain in a visual decision-making task.
It can also be beneficial to carry out smaller-scale studies on infrastructure built for surveys, as we have done as part of the "OpenScope" project. OpenScope allows members of the wider community to propose experiments to be run by Allen Brain Observatory staff . This lowers the barriers to generating high-quality, standards-compliant data, especially for labs whose work is primarily computational. Similarly, the IBL is now entering a phase in which member laboratories conduct more focused studies that take advantage of existing rigs and data pipelines. To encourage data sharing, the field of neurophysiology also needs greater standardization in the way data mining is tracked and credited. Digital object identifiers (DOIs) are an essential first step; we regret not making them an integral part of the Visual Coding data releases. However, they have not solved the problem of discovering reuse as they are not always included in publications. It is more common to include a reference to the original paper in which the dataset was described, but this makes it difficult to distinguish instances of reuse from other types of citations. Currently the onus is on those releasing the data to keep track of who accesses it. To take one example, the cai-1 calcium indicator calibration dataset from the Svoboda Lab at HHMI Janelia Research Campus only has five citations tracked in Google Scholar. Yet a deeper dive into the literature reveals that this dataset has been reused in a wide range of publications and conference papers that benchmark methods for inferring spike rate from calcium fluorescence signals, of which there are likely over 100 in total. Many of these papers only cite the original publication associated with this dataset , refer to the repository from which the data was downloaded (CRCNS), or do not cite the data source at all. 
The lack of an agreed-upon method for citing datasets (like we have for journal articles) is a loss for the community, as it hinders our ability to give appropriate credit to those responsible for collecting widely used datasets. A simple, widely accepted method for citing data would benefit all authors, as it has been shown that publications within the astrophysics community that provide links to the underlying data gain more citations on average than those that do not. To enable more efficient data mining, end users should ideally not need to download data at all. This is particularly true as the volume of data keeps growing (e.g., a single Allen Brain Observatory Neuropixels session generates about 1.2 TB of raw data). Therefore, the goal should be to bring users to the data, rather than the data to users. This is supported by our interviews with end users, who cited slow download speeds as a key challenge. Generic analysis tools, such as Amazon’s SageMaker and Google’s Colab, already make it possible to set up a familiar coding environment in the cloud. However, we are most excited about tools that lower the barriers and the costs of cloud analysis for scientists. Some of the most promising tools include DataJoint, DandiHub, NeuroCAAS, Binder, and Code Ocean (many of which are built on top of the powerful Jupyter platform). All of these are aimed at improving the reproducibility of scientific analyses, while shielding users from the details of configuring cloud services. Cloud-based analysis is not a panacea. Although individual tools can be vendor-agnostic, there will be a push to centralize around a single cloud platform, given the high cost of transferring data out of cloud storage. This could lead to a single company monopolizing the storage of neurophysiology data; it would therefore be prudent to invest in a parallel distribution system that is controlled by scientists.
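To put the quoted ~1.2 TB per-session figure in context, a rough back-of-envelope estimate lands in the same range. The parameters below (six probes per session, 384 channels per probe, 30 kHz sampling, 2-byte samples, a ~2.5-hour session) are our assumptions, not figures from the text:

```python
# Rough estimate of raw data volume for one multi-probe Neuropixels session.
probes = 6                    # assumed probe count per session
channels = 384                # recording channels per Neuropixels probe
sample_rate_hz = 30_000       # AP-band sampling rate
bytes_per_sample = 2          # 16-bit samples
session_seconds = 2.5 * 3600  # assumed session length

total_bytes = probes * channels * sample_rate_hz * bytes_per_sample * session_seconds
total_tb = total_bytes / 1e12
print(round(total_tb, 2))  # ~1.24, consistent with the quoted ~1.2 TB
```

At roughly 138 MB/s of raw broadband data, even small differences in session length move the total by hundreds of gigabytes, which is why download time dominates the cost of adopting such datasets.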
In addition, it is (perhaps not surprisingly) notoriously easy for unwary users to provision expensive cloud computing resources; a single long-running analysis on a powerful cloud workstation could exhaust a lab’s entire annual budget without safeguards in place. Despite these drawbacks, we believe that a move to cloud-based analysis will be essential for reducing the friction involved in adopting new datasets. We plan to move toward supporting a cloud-native sharing model more directly in our upcoming data releases. The value of open data is best realized when it is conveniently accessible. Whether this involves new discoveries or comparing results across studies, data mining is vital for progress in neuroscience, especially as the field as a whole shifts toward more centralized ‘Observatories’ for mice and non-human primates. The BRAIN Initiative has invested considerable resources in advancing instruments and methods for recording a large number of neurons in more sophisticated behavioral contexts. Yet the analytical methods for understanding and interpreting large datasets are lagging, as many of our theoretical paradigms emerged from an era of small-scale recordings.
Factors associated with compassion fatigue and compassion satisfaction in obstetrics and gynaecology nurses: A cross‐sectional study | 1126a3cb-3164-4a33-af0e-2202a3632652 | 10333879 | Gynaecology[mh] | INTRODUCTION Compassion is the understanding and sharing of the emotional state of others. Compassion has long played a positive and active role in both academic research and social life. Research has found that compassion promotes pro‐social behaviour (Singer & Klimecki, ), improves interpersonal relationships and increases an individual's level of well‐being (Saunders, ). According to the International Council of Nurses Code of Ethics for nurses, compassion is one of the eight professional values required of nurses (ICN, ). Compassion is both an essential quality and one of the required competencies for nurses. However, beneath these positive auras, compassion may also have certain negative effects. Due to the high workload of prolonged contact with illness, disability and death, nurses' compassion is highly susceptible to compassion fatigue. Studies have shown that approximately two in five clinical nurses surveyed suffer from compassion fatigue (Duarte & Pinto‐Gouveia, ), which has adverse physical, psychological, emotional and cognitive effects (Alharbi et al., ). It is also known as the ‘cost of care’. In order to scientifically and effectively reduce the generation of compassion fatigue and improve the compassion satisfaction of nurses, it is crucial to identify the influencing factors that induce compassion fatigue and compassion satisfaction generation in nurses. Several studies have been conducted in clinical departments. 
For example, a study in an intensive care unit showed that female nurses aged 18–25 years, with a bachelor's degree and 1–3 years of service, had higher levels of compassion fatigue (İlter et al., ); for oncology and palliative care nurses, long patient stays and high mortality rates trigger compassion fatigue while decreasing compassion satisfaction (Frey et al., ; Jarrad & Hammad, ); being female and having experienced traumatic life events exacerbate compassion fatigue, while a poor work environment, poor collegial relationships and irregular working hours contribute to low compassion satisfaction (Kartsonaki et al., ). However, no studies have focused on compassion fatigue and compassion satisfaction among obstetrics and gynaecology nurses. In recent years, with the introduction of the three‐child policy, an ageing population and the wider use of assisted reproductive technologies, the incidence of obstetric and gynaecological diseases has risen, nursing workloads have grown and nurses are under correspondingly greater stress (Favrod et al., ). Obstetrics and gynaecology nurses are prone to compassion fatigue and low compassion satisfaction as they serve vulnerable groups such as women and children for long periods of time. Persistent compassion fatigue leads to decreased productivity, increases the incidence of adverse care events and directly reduces the quality of care and patient satisfaction (Labrague & de Los Santos, ). Therefore, this study aimed to investigate the current status of compassion fatigue among obstetrics and gynaecology nurses and to analyse its influencing factors. Based on conservation of resources theory, it further explored the influence of lack of professional efficacy on compassion fatigue and the bridging role of social support between the two, thereby identifying prevention and intervention targets to improve the quality of obstetric and gynaecological care.
1.1 Background

The term ‘compassion fatigue’ was originally described by Joinson to refer to the emotional, physical and psychological exhaustion of healthcare workers as a result of work‐related stress. Compassion fatigue is prevalent among nurses; it not only decreases work efficiency but also increases the incidence of adverse nursing events, directly reducing the quality of care and patient satisfaction (Ondrejková & Halamová, ). For this reason, compassion fatigue has also been named the ‘cost of caring’ (Figley, ). Compassion fatigue causes both physical and psychological symptoms (Alharbi et al., ), and its current prevalence among nurses cannot be ignored. Compassion fatigue has been studied in various contexts and has been found in several areas of health care: intensive care (İlter et al., ), emergency (Yu & Gui, ) and paediatrics (Kartsonaki et al., ). Obstetrics and gynaecology is a special unit in hospitals because the majority of patients are women during pregnancy, childbirth, the postpartum period and illness. Nurses often witness women's most stressful moments, trauma and pain, and may absorb patients' pain and suffering while experiencing traumatic distress (Berger & Gelkopf, ). Chronic compassion fatigue leads to decreased quality of care, reduced job satisfaction for nurses and increased turnover (Labrague & de Los Santos, ). The causes of compassion fatigue are not yet clear. Conservation of resources theory is based on the concept that individuals have a tendency to preserve, protect and acquire resources (Hobfoll, ). In 2017, the theory was applied to the study of compassion fatigue in nursing (Coetzee & Laschinger, ). When the resources available to caregivers are adequate, they can provide caring and compassionate resources that help alleviate patients' suffering.
However, once nurses experience a lack of understanding from patients and a lack of support from hospital leadership, their resources are consumed faster than they are replenished; that is, a loss of resources occurs, resulting in compassion fatigue (Coetzee & Laschinger, ). According to conservation of resources theory, nurses' emotional labour is an important resource and an influencing factor on compassion fatigue. Emotional labour was defined as the management of feelings to create a publicly observable facial and bodily display (Hochschild, ). So that patients feel they are being cared for appropriately and safely, hospitals require nurses to work without reflecting the negative emotions they experience to patients, families and colleagues, which greatly increases the level of emotional labour in nursing (Hwang et al., ). Emotional labour leads to emotional dysregulation, which manifests as a conflict between the underlying emotion and the emotion actually expressed. One study found that the level of emotional labour in nurses was strongly associated with the occurrence of compassion fatigue (Barnett et al., ). In addition, a Korean study of 117 nurses found moderate to high levels of emotional labour, which correlated strongly with compassion fatigue; 23% of the nurses had made medical errors in the past 6 months and had a desire to leave nursing (Kwon et al., ). Prolonged emotional labour can drain nurses, make them feel fatigued and lead to burnout (Morris & Feldman, ). Burnout is also known as a syndrome of emotional exhaustion, cynicism and lack of professional efficacy (Maslach & Jackson, ). When burnout occurs, nurses' resources are depleted; they feel that patients pose a threat to their resources, and they exhibit cynicism and a lack of professional efficacy. Some studies have shown that the occurrence of burnout is positively correlated with compassion fatigue (Ruiz‐Fernández et al., ).
Frequent emotional labour can lead to burnout or compassion fatigue and reduce nurses' quality of life and the quality of their care (Kwak et al., ). According to conservation of resources theory, when individuals have insufficient internal resources, they look for supportive resources in the work environment to replace the lost internal resources. Social support is a typical supportive resource: it helps individuals regulate the relationship between stress and physical and mental health, thus helping to alleviate compassion fatigue. Social support was defined as the level of helpful social interaction available in the workplace from both co‐workers and supervisors (Karasek et al., ). A study of paediatric oncology nurses found that when nurses perceived more social support, their compassion fatigue decreased, increasing their productivity and well‐being (Sullivan et al., ). In addition, in a study of critical care, nurses who received leadership and administrative support had lower levels of compassion fatigue (Alharbi et al., ). Therefore, social support is an important factor in reducing compassion fatigue. In summary, many factors influence the occurrence of compassion fatigue in obstetrics and gynaecology nurses. Most studies have focused on the effect of a single factor on compassion fatigue while ignoring the combined effects of multiple factors. Therefore, the purpose of this study was to understand the levels of nurses' compassion fatigue and compassion satisfaction and to examine their relationships with multiple variables. To this end, the research questions for this study were as follows.

1. What are the levels of compassion fatigue and compassion satisfaction among obstetrics and gynaecology nurses?
2. What are the influencing factors of compassion fatigue and compassion satisfaction among obstetrics and gynaecology nurses?
3. Are there any associations between these influencing factors?
METHODS

2.1 Design

An online cross‐sectional study was conducted.

2.2 Instruments, validity and reliability

The questionnaires used in this study included socio‐demographic characteristics, the Chinese version of the Compassion Fatigue Scale, the Maslach Burnout Inventory General Survey (MBI‐GS), the Emotional Labour Scale and the Social Support Rate Scale (SSRS). All questionnaires were reviewed before use by five professors in the field (three in obstetrics and gynaecology, one in psychological care and one in care management).

2.2.1 Socio‐demographic characteristics

This questionnaire was self‐designed after a review of the literature. It covers age, marital status, only‐child status, number of children, education level, work experience, professional title, employment status, night shifts, average weekly hours and physical condition.

2.2.2 Chinese version of the Compassion Fatigue Scale

The Professional Quality of Life Scale (ProQOL) revised by Stamm was used to form the Chinese version of the Compassion Fatigue Scale adopted in this study. The scale includes three dimensions: compassion satisfaction, burnout and secondary traumatic stress, each with 10 items, for a total of 30 items. Each item is rated on a 5‐point Likert scale, with the frequency of occurrence ranging from ‘none’ to ‘always’; items 14, 15, 17 and 29 are reverse scored. The maximum score for each dimension is 50, and the threshold values are <37 (compassion satisfaction), >27 (burnout) and >17 (secondary traumatic stress) respectively. Exceeding the threshold on one dimension indicates mild compassion fatigue, on two dimensions moderate compassion fatigue and on all three dimensions high compassion fatigue. In this study, the sum of the burnout and secondary traumatic stress scores was used as the compassion fatigue score.
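The scoring rule can be stated compactly as a small function. This is an illustrative sketch of the thresholds described above, not a validated scoring tool; the function and variable names are our own:

```python
def classify_compassion_fatigue(cs, burnout, sts):
    """Apply the ProQOL thresholds described in the text.

    cs: compassion satisfaction score (flagged when below 37)
    burnout: burnout score (flagged when above 27)
    sts: secondary traumatic stress score (flagged when above 17)
    """
    exceeded = sum([cs < 37, burnout > 27, sts > 17])
    level = {0: "none", 1: "mild", 2: "moderate", 3: "high"}[exceeded]
    # As in this study, the compassion fatigue score is the sum of the
    # two negative dimensions (burnout + secondary traumatic stress).
    cf_score = burnout + sts
    return level, cf_score

print(classify_compassion_fatigue(40, 30, 20))  # ('moderate', 50)
```

In the example, compassion satisfaction (40) is above its threshold, while both burnout (30 > 27) and secondary traumatic stress (20 > 17) exceed theirs, so two dimensions are flagged and the case is classified as moderate.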
In this study, the total Cronbach's alpha coefficient of the scale was 0.821; the coefficient for compassion fatigue was 0.820 and that for compassion satisfaction was 0.882.

2.2.3 Maslach Burnout Inventory General Survey

The MBI‐GS (Maslach et al., ), which includes 15 items, was used. Scores range from ‘never (0)’ to ‘very frequently (6)’. The scale is divided into three dimensions: cynicism, emotional exhaustion and lack of professional efficacy. Cynicism and emotional exhaustion are scored positively, that is, the higher the score, the more serious the burnout. The lack of professional efficacy dimension is scored in reverse, that is, the lower the score, the more pronounced the lack of professional efficacy. The sum of the cynicism and emotional exhaustion scores was used as the burnout score. In this study, the total Cronbach's alpha coefficient of the scale was 0.885; the coefficient for the combined two dimensions was 0.943 and that for the lack of professional efficacy dimension was 0.902.

2.2.4 Emotional Labour Scale

The Chinese version of the Emotional Labour Scale for nurses based on Grandey's work was used, which has sub‐categories for surface acting (seven items), emotional expression requirements (four items) and deep acting (three items). Each item is measured on a 6‐point Likert scale from 1 (‘strongly disagree’) to 6 (‘strongly agree’). The total score ranges from 14 to 84, with higher scores indicating higher levels of emotional labour. In this study, the total Cronbach's α coefficient of the scale was 0.870.
2.2.5 Social Support Rate Scale

The SSRS was originally developed by Xiao Shuiyuan (Yu et al., ) and covers subjective support, objective support and support utilization, with a total of 10 entries. Entries 1–4 and 8–10 are single‐choice questions with four options each, scored 1, 2, 3 and 4 for the first to fourth answers respectively. Entry 5 has five options (A–E), each rated from ‘none’ (1 point) to ‘full support’ (4 points); the entry score is the sum of the option scores. Entries 6 and 7 are scored 0 if the respondent answers ‘no source’; otherwise, the score equals the number of sources selected. The total score of the scale ranges from 12 to 66, with higher scores indicating more social support received. The total Cronbach's alpha coefficient for the scale in this study was 0.815.

2.3 Sampling and recruitment

This cross‐sectional study used convenience sampling: obstetrics and gynaecology nurses from five tertiary care hospitals in ‘XX’ were recruited from January to February 2022. Data were collected through the mobile Questionnaire Star mini‐programme. After the questionnaire was created, the mini‐programme generated a QR code, and the investigators asked participants to carefully review the informed consent form and then complete the questionnaire anonymously.

2.4 Sample size and power

Sample size calculation formula: N = [(t_{α/2} + t_β) × S / δ]^2. Interpretation: α = 0.05, β = 0.10, power (1 − β) = 0.90; t_{α/2,∞} = t_{0.05/2,∞} = 1.96; t_{β,∞} = t_{0.10,∞} = 1.645. S is the standard deviation obtained from the pre‐experiment.
δ is the allowable error; following the literature, it is set at 0.25 or 0.50 times the standard deviation when no professionally meaningful error level is given. N is the sample size; the calculation yields 208. Allowing for a 20% rate of invalid responses, the target sample was 250.

2.5 Quality appraisal

Design: Participants were selected according to the inclusion and exclusion criteria to control exclusion bias, and the purpose of the study was explained and consent sought from participants to ensure the quality of the survey.

Implementation: A uniform set of instructions described the questionnaire entries and the precautions for completing them, in order to secure participants' cooperation. Any questions were answered promptly and objectively by the researcher or investigators, and participants were required to complete the questionnaire anonymously and independently so as to control confounding bias.

Data collation and analysis: After data collection, the investigator checked the returned questionnaires one by one, eliminating invalid questionnaires (e.g., ≥5% missing items, misfilled, patterned responses or identical questionnaires). Data were entered by two people on two computers and compared item by item to ensure accuracy before statistical analysis. Statistical methods appropriate to the nature of the variables and the purpose of the study were selected to ensure the reliability of the results.

2.6 Population and sample

There are 11 public hospitals in ‘XX’, of which five are tertiary hospitals with obstetrics and gynaecology departments (four grade A and one grade B), with an estimated 444 nurses overall.
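The sample-size calculation in Section 2.4 can be reproduced directly. With δ set to 0.25 × S, the standard deviation cancels out of the formula; the 20% allowance is applied as a 1.2 inflation factor, consistent with the figures in the text (a sketch of the arithmetic, with variable names of our own choosing):

```python
import math

t_alpha = 1.96       # two-sided alpha = 0.05
t_beta = 1.645       # power (1 - beta) = 0.90
delta_over_s = 0.25  # allowable error delta = 0.25 * S, so S cancels

n = math.ceil(((t_alpha + t_beta) / delta_over_s) ** 2)
n_with_allowance = math.ceil(n * 1.2)  # 20% allowance for invalid responses

print(n, n_with_allowance)  # 208 250
```

This recovers the 208 obtained from the formula and the working target of 250 after the 20% allowance.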
In this study, obstetrics and gynaecology nurses in the public tertiary hospitals with obstetrics and gynaecology departments in the ‘XX’ area were studied as a whole; 329 nurses were surveyed, yielding a valid sample of 311. Convenience sampling was used, and because the calculated minimum sample size was 250, the 311 valid cases can be considered representative of obstetrics and gynaecology nurses in ‘XX’ tertiary hospitals. 2.7 Inclusion and/or exclusion criteria The inclusion criteria were: (1) working registered nurses (midwives were required to hold a maternal and child health certificate); (2) more than 1 year of work experience. Intern nurses, nurses undertaking further study, nurses on rotation and nurses on leave for any reason during the survey period were excluded. 2.8 Data analysis Data were checked in Excel 2019 and analysed with SPSS 24.0. Categorical variables were expressed as frequencies and percentages; continuous variables were described as mean ± standard deviation. Demographic data were examined by univariate analysis, including the independent‐samples t‐test, one‐way ANOVA and the Kruskal–Wallis test. Pearson's correlation analysis was used to assess relationships between variables, and Spearman's correlation analysis was used when the data were not normally distributed. Influencing factors were evaluated by stepwise multiple linear regression analysis. Harman's single‐factor test was performed to assess common method bias. Meanwhile, Model 4 and Model 8 in the Process macro for SPSS were used to analyse the mediating effect, with p < 0.05 considered statistically significant. A bootstrap procedure (5000 resamples) was used to test the significance of the mediating effect; a 95% confidence interval (CI) excluding zero indicates a significant indirect effect.
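The percentile‐bootstrap test of an indirect effect can also be sketched outside SPSS. The following outline is illustrative only (simulated data and simple OLS, not the study data or the Process macro itself); the variable roles mirror the study: X = lack of professional efficacy, M = social support, Y = compassion fatigue.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated illustrative data (NOT the study data).
n = 311
x = rng.normal(size=n)
m = -0.3 * x + rng.normal(size=n)            # a path: X lowers M
y = 0.11 * x - 0.2 * m + rng.normal(size=n)  # c' and b paths

def slope(pred, controls, dep):
    """OLS coefficient of `pred` on `dep`, controlling for `controls`."""
    design = np.column_stack([np.ones(len(dep)), pred, *controls])
    return np.linalg.lstsq(design, dep, rcond=None)[0][1]

# Percentile bootstrap of the indirect effect a*b (5000 resamples),
# as in a simple-mediation (Model 4) analysis.
boot = []
for _ in range(5000):
    i = rng.integers(0, n, n)
    a = slope(x[i], [], m[i])       # X -> M
    b = slope(m[i], [x[i]], y[i])   # M -> Y, controlling for X
    boot.append(a * b)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# The indirect effect is judged significant when this interval excludes 0.
```

The design choice here matches the criterion stated above: significance is read off the percentile interval of the resampled a·b products rather than from a normal‐theory standard error.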
2.9 Ethical considerations This study was approved by the Ethics Committee of ‘XX’ (REDACTED). All participants provided informed consent and took part voluntarily; the survey was conducted anonymously and their information was kept confidential. All collected information was held by the investigator, and only the investigator had access to it. All methods used in this study were in accordance with the principles of the Institutional Research Committee and the Declaration of Helsinki.
RESULTS 3.1 Current situation of compassion fatigue and compassion satisfaction in gynaecology and obstetrics nurses with different characteristics In this study, 311 valid questionnaires were returned, with an effective rate of 94.5%. Among the obstetrics and gynaecology nurses, 75 (24.12%) reported normal or mild compassion fatigue, 148 (47.59%) moderate and 88 (28.30%) high compassion fatigue; 42 (13.50%) reported low compassion satisfaction, 248 (79.74%) moderate and 21 (6.75%) high compassion satisfaction. The analysis of general information of obstetrics and gynaecology nurses is shown in Table . 3.2 Survey respondents' scores on each scale Table shows that obstetrics and gynaecology nurses had moderate to high levels of compassion fatigue and a moderate level of compassion satisfaction. Of the three dimensions of emotional labour, surface acting had the highest score and was dominant.
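The normal/mild/moderate/high grading reported in Section 3.1 follows the ProQOL threshold rules described in the instruments section (compassion satisfaction < 37, burnout > 27, secondary traumatic stress > 17; severity is given by how many dimensions cross their threshold). A minimal sketch of that rule; the function name and example scores are illustrative, not taken from the study data:

```python
def compassion_fatigue_level(cs: int, bo: int, sts: int) -> str:
    """Grade compassion fatigue from the three ProQOL dimension scores.

    A dimension is flagged when compassion satisfaction (cs) < 37,
    burnout (bo) > 27 or secondary traumatic stress (sts) > 17.
    """
    flags = (cs < 37) + (bo > 27) + (sts > 17)
    return {0: "normal", 1: "mild", 2: "moderate", 3: "high"}[flags]

print(compassion_fatigue_level(40, 25, 15))  # normal: no dimension flagged
print(compassion_fatigue_level(35, 30, 20))  # high: all three flagged
```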
3.3 Correlational analysis As shown in Table , compassion satisfaction was negatively associated with compassion fatigue ( p < 0.01); emotional exhaustion, cynicism, lack of professional efficacy and emotional labour were positively associated with compassion fatigue ( p < 0.01); and social support was negatively associated with compassion fatigue ( p < 0.01). 3.4 Stepwise multiple linear regression analysis of compassion fatigue and compassion satisfaction of nurses in obstetrics and gynaecology A stepwise multiple linear regression analysis was conducted with compassion satisfaction and compassion fatigue as dependent variables, and with the demographic variables that were significant in the univariate analysis, together with the cynicism, emotional exhaustion, lack of professional efficacy, emotional labour and social support scores, as independent variables. According to the results (Table ), significant predictors of compassion satisfaction were lack of professional efficacy, cynicism, social support, work experience, employment status and night shift ( p < 0.01); significant predictors of compassion fatigue were physical condition, number of children, emotional labour, lack of professional efficacy, emotional exhaustion and non‐only‐child status ( p < 0.05). 3.5 Common method deviation test Because the data for this study were self‐reported, common method bias may exist. We used Harman's single‐factor method to test for it. The results showed 15 factors with eigenvalues greater than 1, and the unrotated first factor accounted for 22.86% of the variance, indicating no serious common method bias in this study. 3.6 Mediating effect analysis According to conservation of resources theory, when individuals have insufficient internal resources, they look for supportive resources in the work environment to replace the lost internal resources.
The social support perceived by individuals is a typical supportive resource, which helps individuals regulate the relationship between stress and physical and mental health, and it facilitates the formation of psychological resources, thus helping to alleviate compassion fatigue. Therefore, this study used social support as a mediating variable and confirmed its mediating role between lack of professional efficacy and compassion fatigue using Model 4 in the Process macro. Results of the mediation effect analysis are presented in Table and Figure . The total effect of lack of professional efficacy on compassion fatigue was significant ( β = 0.147, 95% CI [0.042, 0.252]); the effects of lack of professional efficacy on social support and of social support on compassion fatigue were also significant. Furthermore, the direct effect of lack of professional efficacy on compassion fatigue remained significant after adjusting for social support ( β = 0.112, 95% CI [0.006, 0.219]), suggesting that social support partially mediates the relationship between lack of professional efficacy and compassion fatigue. That is, social support can effectively mitigate the exacerbating effect of lack of professional efficacy on compassion fatigue.
DISCUSSION In our study, we surveyed obstetrics and gynaecology nurses in different tertiary hospitals in ‘XX’ to examine their compassion fatigue and compassion satisfaction. Compassion fatigue was determined from job burnout and secondary traumatic stress, as these are the variables used in the survey instrument. According to our data, 75.88% of obstetrics and gynaecology nurses reported moderate to high levels of compassion fatigue.
Only 6.75% of obstetrics and gynaecology nurses reported high levels of compassion satisfaction. Compared with oncology nurses (Xie et al., ), emergency nurses (O'Callaghan et al., ) and haematology cancer nurses (Chen et al., ), the obstetrics and gynaecology nurses in this study had lower levels of compassion satisfaction, while their level of compassion fatigue was comparable to that of nurses caring for women in the context of maternal and perinatal deaths (Mashego et al., ). All of the above studies, like ours, used ProQOL Version 5. These differences may be related to differences in personal and work environments (Stamm, ). According to our findings, the nurse's personal environment, including physical condition and the number of children, influences compassion fatigue. Our results showed that the poorer a nurse's physical condition, the higher the level of compassion fatigue, consistent with a previous study (Qu et al., ). The body is the source of energy: when individuals are in poor health, their resource balance is disrupted, their compassion decreases and, in severe cases, compassion fatigue occurs (Hobfoll & Wells, ). In addition, the number of children was associated with compassion fatigue. The number of children a nurse has is an important factor affecting her quality of life and work (Jarrad & Hammad, ). In this study, 87.5% of the nurses were young adults, taking on various roles as mothers, daughters and wives, making family–work conflict inevitable. Given this, we hypothesized that when nurses have more people to care for, work and family are more prone to conflict, which constitutes a risk factor for compassion fatigue. The nurse's work environment, including work experience and night shifts, influences compassion satisfaction. First, compassion satisfaction was higher among nurses with <4 years and with more than 16 years of experience.
Nurses with <4 years of experience are new to the profession, with light family responsibilities and strong career ambitions, while nurses with more than 16 years of experience are more competent and mature in their thinking (Alharbi et al., ). In contrast, the lower compassion satisfaction of nurses with 4–16 years of experience may be related to difficulty reconciling family and work. What is more, night shift work was associated with high levels of burnout and secondary traumatic stress. A study of Chinese midwives working in the delivery room showed that night shift work increased their level of compassion fatigue (Qu et al., ), and another study found that night shift work lowered the physical and mental health of obstetrics and gynaecology nurses (Coetzee & Klopper, ). The irregular schedule of night shift work may contribute to lower compassion satisfaction. In response to these factors, nursing managers should use flexible scheduling and pay closer attention to the emotional status of nurses with 4–16 years of experience. As revealed in our study, compassion fatigue was higher in nurses with high emotional labour, and compassion satisfaction was higher in nurses with high social support. Emotional labour is work that requires individuals to control their emotions to achieve desired outcomes and is usually associated with negative consequences (Hwang et al., ). Continuous and regular emotional labour can lead to burnout or compassion fatigue and reduce nurses' quality of life and work‐related care (Kwak et al., ); our results were consistent with this. The negative effects of nurses' emotional labour are an important factor affecting patient service delivery (Kim, ). Similar to previous studies (Hunsaker et al., ; Yu et al., ), we found that social support was a protective factor for compassion satisfaction.
Social support facilitates physical and mental health and promotes the formation of psychological resources, thus contributing to improved compassion satisfaction (Park et al., ). Studies have shown that social support can reduce the occurrence of compassion fatigue in nurses, and that recognition and support from leaders and colleagues, as the main sources of social support, can effectively improve nurses' compassion (Kelly & Lefton, ). Likewise, a good work environment (e.g. peer or social support, recognition of professional values, a manageable workload) increases nurses' job satisfaction, which makes them more proactive at work and increases compassion satisfaction (Qu et al., ). In addition, lack of professional efficacy was a predictor of both compassion satisfaction and compassion fatigue. It has been found that lack of professional efficacy leads to high compassion fatigue and low compassion satisfaction, and can also affect an individual's productivity and sense of accomplishment at work (Fan & Lin, ; Koutra et al., ). Specifically, individuals who lack professional efficacy have lower self‐recognition and remain in a negative state, resulting in a lower capacity for compassion. We were surprised to find that lack of professional efficacy can influence compassion fatigue and compassion satisfaction through social support. Studies have demonstrated that lack of professional efficacy negatively predicts social support, while social support protects compassion satisfaction against compassion fatigue, supporting existing theoretical perspectives and empirical studies (Hunsaker et al., ; Ye et al., ). For individuals, social support is an important resource that provides nurses with emotional support and affirmation of self‐worth (Park et al., ).
According to our mediation analysis, social support is a critical intermediary between lack of professional efficacy and compassion fatigue/compassion satisfaction. Social support can buffer and compensate for the resources lost through lack of professional efficacy, reducing the incidence of compassion fatigue and increasing nurses' compassion satisfaction. In summary, social support acts as a ‘bridge’ between lack of professional efficacy and compassion fatigue/compassion satisfaction. Based on these findings, nursing managers can therefore provide an external resource (e.g. social support) to better retain a compassionate and dedicated obstetrics and gynaecology nursing workforce. 4.1 Strengths and limitations of the work The research topic is relatively new. Compassion fatigue among obstetrics and gynaecology nurses in ‘XX’ provincial tertiary hospitals has rarely been a research focus, and the impact of the ‘XX’ comprehensive two‐child policy on compassion fatigue among these nurses opens up a new area of research; this may raise concern for the occupational health of obstetrics and gynaecology nurses in ‘XX’ and motivate the government to increase the training of related professionals. Limitations of this study include the following: the cross‐sectional survey was conducted only in ‘XX’ and most participants were from tertiary care hospitals, which may limit the generalizability of the results; the survey captured participants' views at a single time point without follow‐up, so the results reflect only what participants thought at that time; and self‐report bias is an inherent limitation of the study design. Finally, owing to the lack of research in this area, this article is only a preliminary study of the current situation, and we hope to conduct more in‐depth research, such as interviews and consultations with professionals.
4.2 Recommendations for further research It is recommended that subsequent studies focus on obstetrics and gynaecology nurses who have been working for 4–16 years and incorporate semi-structured interviews to explore in greater depth the factors influencing compassion fatigue in obstetrics and gynaecology nurses. This study presents only a simple mediation model with moderation, and there are more potential mediators and moderators between these two variables that are worth exploring; appropriate interventions may also be developed based on the results obtained in this study. 
CONCLUSION The study found that 75.88% of obstetrics and gynaecology nurses had moderate to high levels of compassion fatigue. Among the personal factors of obstetrics and gynaecology nurses, physical condition and the number of children raised were influential factors closely related to compassion fatigue. Secondly, among the work environment factors, nurses with 4–16 years of work experience were more likely to experience low satisfaction. Moreover, nurses who lacked professional efficacy were more likely to experience compassion fatigue, and the mediation analysis revealed that compassion fatigue could be effectively reduced by obtaining social support. In response to these findings, nursing managers are advised to focus on caring for obstetrics and gynaecology nurses who are in poor health or have more children; to provide appropriate interventions to reduce the incidence of compassion fatigue among nurses who have worked for 4–16 years; and to provide more social support so that nurses can achieve greater satisfaction and happiness in their work. Having identified the influencing factors of compassion fatigue in this study, we will develop appropriate interventions, such as positive stress reduction therapy, reflective debriefing and group drawing, to effectively prevent and reduce compassion fatigue among obstetrics and gynaecology nurses. 
Jia Wang and Mei Su contributed to the conceptualization of the study, performed the analysis, and wrote the manuscript; Wenzhong Chang, Yuchong Hu and Peijuan Tang contributed significantly to investigation and project administration; Yujia Ma assisted with data curation; Jiaxin Sun contributed to the conceptualization of the study and reviewed the manuscript. All authors have read and approved the manuscript. This study was supported by the Inner Mongolia Science and Technology Planning Project Fund (2020GG011). The authors declare that they have no conflicts of interest. |
Evaluating accuracy and reproducibility of ChatGPT responses to patient-based questions in Ophthalmology: An observational study | 69cadc4d-4190-40d0-8262-2e755545f7a8 | 11315477 | Ophthalmology[mh] | Artificial intelligence (AI) is a simulation of human intelligence that thinks and learns like a human. AI has grown significantly in recent years and become integrated into numerous products and services on which individuals increasingly rely. In November 2022, Chat Generative Pre-Trained Transformer (ChatGPT) was developed by OpenAI. ChatGPT is an AI-based language model developed to generate human-like text responses mimicking a conversation, making it suitable for various aspects of life. A particular reason why many individuals rely on ChatGPT is that it is accessible, easy to use, and informative, especially for medical concerns. Although ChatGPT has brought significant benefits to people's lives, the increased reliance on it brings about several consequences. Specifically, as patients turn to these AI applications for guidance on their healthcare needs, the accuracy and reliability of the information provided become critical. Recent studies have examined ChatGPT applications in healthcare settings. Yeo et al. examined ChatGPT's responses on cirrhosis and hepatocellular carcinoma, which revealed response accuracy rates of 79.1% for cirrhosis and 74.0% for hepatocellular carcinoma, suggesting its role more as a supplementary tool than a primary healthcare resource. A similar role was demonstrated for patients with diabetes, who used ChatGPT as an educational tool for diabetes-related queries. For example, chronic conditions that are generally stable and require ongoing management might be well-suited to the knowledge ChatGPT is trained on. In contrast, diseases with more nuanced or unpredictable outcomes may present greater challenges for accurate AI-guided advice. Many studies have evaluated ChatGPT responses in ophthalmology. 
Antaki et al. used two 260-question simulated exams from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and the OphthoQuestions online question bank to compare the accuracy of responses generated by different versions of ChatGPT (3.5). They found that the legacy model achieved 55.8% accuracy on the BCSC set and 42.7% on the OphthoQuestions set. With ChatGPT Plus, accuracy increased to 59.4% and 49.2%, respectively. On the other hand, Bernstein et al. used patients' questions that had been answered by American Academy of Ophthalmology (AAO)-affiliated ophthalmologists to evaluate the quality of ophthalmology advice generated by an LLM chatbot in comparison with ophthalmologist-written advice, with the assistance of a masked panel of 8 board-certified ophthalmologists. The panel's average accuracy in distinguishing between AI and human responses was 61.3%. Of 800 evaluations of chatbot-written answers, only 168 answers (21.0%) were marked as human-written, while 517 of 800 human-written answers (64.6%) were marked as AI-written, which shows the high ability of ChatGPT (version 3.5) to simulate text written by ophthalmologists. However, data on ChatGPT accuracy and reproducibility for patient-based questions are lacking. Therefore, this study aims to measure the accuracy of ChatGPT responses to patients' concerns about ophthalmology conditions. Given the specialized nature of ophthalmology, which often involves a complex interplay of symptoms, treatments, and patient lifestyle factors, this study seeks to understand how well ChatGPT can handle such specific medical inquiries using questions obtained from the "Ask an ophthalmologist" page of the AAO. Institutional review board approval was not required for this type of article. 2.1. 
Question curation/data source Questions were first obtained from the "Ask an ophthalmologist" page of the AAO. Then, 2 authors selected, reviewed, and approved the questions to evaluate their inclusion in the study. Questions were then further evaluated to exclude duplicates and irrelevant questions. Questions of a general nature that necessitated subjective or personalized responses were likewise excluded. Some questions were grammatically edited to ensure comprehensibility. To conduct statistical analysis, questions were categorized into 7 ophthalmology-related categories to assess ChatGPT's performance efficiently: (1) glaucoma; (2) cataract; (3) infectious disorders; (4) astigmatism; (5) retinal disorders; (6) LASIK and laser procedures; (7) strabismus and amblyopia. Finally, 115 questions were used to generate responses from ChatGPT. 2.2. Response generation Each question was prompted to ChatGPT (version 3.5) twice, on separate occasions, using the "new chat" function, with the goal of generating 2 responses per question. This was done to determine the reproducibility of responses to the same question. 2.3. Question grading Responses to questions were first independently graded for accuracy and reproducibility by 2 board-certified ophthalmologist reviewers. Reviewers were instructed to grade the accuracy of responses based on known information leading up to 2021. Reproducibility was graded based on the similarity in accuracy of the 2 responses per question generated by ChatGPT. If the responses were similar, the shared grade was recorded directly; if they were not similar, the first response was used for grading. In this way, we obtained both accuracy and reproducibility. The accuracy of each response was graded with the following scale: Comprehensive: accurate and comprehensive; nothing more a board-certified ophthalmologist could add if asked this question by a patient. 
Correct but inadequate: all information is correct but incomplete; a board-certified ophthalmologist would have more important information to add if asked this question by a patient. Some correct and some incorrect. Completely incorrect. Disagreement in reproducibility or grading of a response was resolved in a consensus meeting between the 2 board-certified ophthalmologist reviewers. The final grades were then compiled and used to analyze the overall performance of ChatGPT in answering questions related to ophthalmology. 2.4. Statistical analysis Extracted data were entered into a spreadsheet. Statistical analysis was performed using the IBM SPSS statistical package for Windows v.26 (Armonk, NY). Data were expressed as frequency (percentage) for nominal data. Proportions of responses earning each grade were calculated. To determine reproducibility, responses were categorized into 2 groups: grades of 1 and 2 comprised the first group, and grades of 3 and 4 comprised the second group. The 2 responses to a question were considered significantly different from one another, or not reproducible, if their assigned grades fell under different groups. 
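The four-point grading scale and the two-group reproducibility rule described above can be expressed as a short sketch (a hypothetical illustration only; the function names are ours, not part of the study's analysis code):

```python
# Hypothetical sketch of the grading rules described in the Methods.
# Accuracy grades: 1 = comprehensive, 2 = correct but inadequate,
# 3 = some correct and some incorrect, 4 = completely incorrect.

def grade_group(grade: int) -> int:
    """Collapse the 4-point accuracy scale into the two groups used for
    reproducibility: grades 1-2 form group 1, grades 3-4 form group 2."""
    return 1 if grade in (1, 2) else 2

def is_reproducible(first: int, second: int) -> bool:
    """Two responses to the same question count as reproducible when
    their grades fall in the same group."""
    return grade_group(first) == grade_group(second)

def reproducibility_rate(grade_pairs) -> float:
    """Share of questions whose two responses were reproducible."""
    return sum(is_reproducible(a, b) for a, b in grade_pairs) / len(grade_pairs)

# Example: a pair graded (1, 2) is reproducible, a pair graded (1, 3) is not.
print(reproducibility_rate([(1, 2), (1, 3)]))  # 0.5
```

Under this rule, a question graded "comprehensive" in one run and "correct but inadequate" in the other still counts as reproducible, since both grades fall in the first group.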
In total, 115 questions were inputted into ChatGPT (see File S1, Supplemental Digital Content, http://links.lww.com/MD/N283 ). ChatGPT provided "comprehensive" responses to 70/117 (59.8%) of questions. By category, the model provided "comprehensive" responses to 64.7% of questions related to "Glaucoma," 60% of questions related to "Cataract," 73.3% of questions related to "infectious disorders and conjunctivitis," 57.9% of questions about "Astigmatism," 60% of questions about "LASIK and Laser procedures," and 70% of questions related to "Amblyopia and strabismus" (Table ). On the other hand, the percentage of comprehensive responses to questions related to retinal diseases was the lowest: only 50% of these questions were answered comprehensively, and 30.5% received responses graded as "Correct but incomplete." Overall, only one question, under the "retinal diseases" category, received a "completely incorrect" response: "Does drinking water eliminate flashes?" Reproducibility was defined as no difference in grading categories (1 and 2 vs 3 and 4) between the 2 responses to each question. ChatGPT provided reproducible responses to 91.5% of questions. Responses were reproducible for 100% of questions under the categories "Cataract" and "LASIK & Laser procedures," and lower in the other categories (Table ). In this study, we evaluated the accuracy and reproducibility of ChatGPT responses to patients' concerns using patient-written questions from the AAO. 
We found that ChatGPT responded comprehensively to 59.8% of the questions, with a reproducibility rate of 91.5%. These findings suggest high accuracy and reproducibility for patients' questions in ophthalmology. ChatGPT is an AI-based chatbot developed to generate human-like text responses; it is trained on a large database of information from a wide range of sources, including online websites, books, and articles, leading up to 2021. The model was brought into the limelight because it made the process of interacting with AI simple, accessible, and free. It can be used to answer questions, hold conversations, improve or review academic writing, and develop study plans. It also has potential for assisting decision-makers in healthcare by summarizing relevant guidelines and treatment options together with potential benefits, side effects, and drug interactions. ChatGPT has its own limitations; for example, given one phrasing of a question, the model can claim not to know the answer, but given a slight rephrasing, it can answer correctly. More studies need to be conducted to fully understand how to navigate ChatGPT's strengths and limitations in different medical specialties. Our findings align with the existing literature on ChatGPT performance in ophthalmology. Antaki et al. tested 2 versions of ChatGPT (based on version 3.5) on 2 question banks related to board examinations in ophthalmology and showed that the legacy model achieved an accuracy of 55.8% on the BCSC set and 42.7% on the OphthoQuestions set, whereas ChatGPT Plus achieved a 59.4% correct response rate on the BCSC set and 49.2% on OphthoQuestions. In addition, Taloni et al. found that ChatGPT 4.0 correctly answered 82.4% of the questions in the self-assessment program of the American Academy of Ophthalmology, which was higher than the human score (75.7%). 
It was also shown that ChatGPT performs accurately when responding to questions about orbital and oculofacial disorders, with an average appropriateness score of 5.3/6.0 ("mostly appropriate" to "completely appropriate"). Our study found that ChatGPT scored best in the infectious disorders section (73.3%) and poorest in the retinal disorders section (50%). Antaki et al. showed that the legacy model performed best in general medicine (75%), fundamentals (60%), and cornea (60%), but less well in glaucoma (37.5%), pediatrics and strabismus (42.5%), and neuro-ophthalmology (25%), which contradicts the findings of Madadi et al., who showed the potential to diagnose cases related to neuro-ophthalmology with accuracy comparable to a certified neuro-ophthalmologist, with estimated accuracies of 59% and 82% for ChatGPT 3.5 and ChatGPT 4.0, respectively. Furthermore, although the ChatGPT Plus model showed the same strengths in the same subjects as the legacy model, its poorest sections remained neuro-ophthalmology, oculoplastics, and clinical optics. We used patient-written questions to simulate the use of ChatGPT 3.5 by patients when they seek information regarding their medical conditions, as its availability will likely make ChatGPT a popular source of information among patients. Similarly, Bernstein et al. (version 3.5) used 200 questions from The Eye Care Forum of the AAO to evaluate the ability of ChatGPT to simulate the answers written by AAO-affiliated ophthalmologists. Their 8-member panel distinguished between AI and human responses with an average accuracy of 61.3%, and 21% of the 800 evaluations of chatbot-written answers were marked as human-written. In addition, they found that the likelihood of ChatGPT including incorrect or inappropriate content in its answers was comparable with human answers, as was the likelihood and extent of harm. 
ChatGPT has also been evaluated using patient-based questions in other medical domains. In bariatric surgery, Samaan et al. found that 86.8% of ChatGPT responses to questions were "accurate and comprehensive," with a reproducibility rate of 90.7%. On the other hand, Yeo et al. (using version 3.5) found that only 79.1% of responses were correct and 47.3% were comprehensive for questions related to cirrhosis, compared with 74.0% correct and 41.1% comprehensive responses for questions on hepatocellular carcinoma. These findings suggest that ChatGPT can be a very effective source of information and an adjunct to medical advice, but not a substitute. Given that many patients seek health information from online sources, it is very likely that a high proportion are using LLM chatbot models as a source of medical advice about their conditions. Many studies suggest that high levels of health literacy among patients are associated with better care and outcomes for their illnesses, by increasing disease awareness and compliance with treatment, improving surveillance for some complications, and decreasing medical expenses. The same goes for literacy regarding eye-related diseases. There are several ways patients can seek information about their illnesses. One common way is using internet search engines. These are informative to some extent; however, search results can be overwhelming and misleading, as they provide dozens of websites related to the question but no direct, comprehensive answer. Another, brand-new way of obtaining information is ChatGPT. It is a free tool that provides potentially reliable and accurate health information in a smooth conversation with the patient. Since ChatGPT was launched in November 2022, its use by patients seeking health information has been on the rise, since it has numerous advantages over classic search engines. 
For instance, it provides the details of an illness in a comprehensive conversational dialogue, which can contribute to improved health literacy and reduce unnecessary anxiety among patients regarding their illnesses. In addition, it is free of charge, which makes it accessible for patients with financial limitations, who are already more prone to poor health outcomes. Furthermore, it can help improve patient outcomes by providing personalized care plans based on individual needs. It can even show empathy in its responses to patients and their caregivers and offer feasible recommendations for better outcomes. On the other hand, ChatGPT has some drawbacks that make it less reliable. One is that it is trained on information only up until 2021, and may therefore provide outdated information. The second is that the dataset it uses to produce answers is unknown, which may affect its reliability. However, ChatGPT is improving with time, and these drawbacks may be resolved later. According to our study, ChatGPT is a reliable source of information for ophthalmology patients; however, at the time of writing, it should not replace seeking professional medical advice. AI in healthcare is a promising technology offering enhanced diagnostics, streamlined processes, and improved patient care. However, this technology is accompanied by ethical implications that demand careful consideration. Privacy and data security are paramount concerns, necessitating robust anonymization techniques to protect patient data. Algorithmic bias poses a significant challenge, demanding diverse datasets and ongoing monitoring to ensure fairness. Transparency and explainability in AI decision-making processes enhance trust and accountability. Ethics must remain at the forefront in the ever-evolving realm of healthcare technology. 
By embracing these strategies and best practices, healthcare systems and professionals can harness the potential of AI, ensuring responsible and ethical integration that benefits patients while upholding the highest ethical standards. 4.1. Strengths and limitations To the best of our knowledge, this is one of the few studies to examine the utility of ChatGPT in the field of ophthalmology. Patient questions were obtained from a well-known, reliable source (the "Ask an ophthalmologist" page of the AAO) to provide a comprehensive and realistic sample of patient questions. Responses to questions were independently graded for accuracy and reproducibility by 2 board-certified ophthalmologist reviewers to comprehensively evaluate the accuracy and reproducibility of ChatGPT's responses. However, our study has some limitations. First, we used a relatively small number of questions compared with previous studies, which may limit how well the results reflect the effectiveness of ChatGPT. Second, the sources of information on which ChatGPT was trained are unknown, which may impact the reliability of its responses for certain topics. Third, medical guidelines and standards of practice differ from country to country according to the relevant medical society, which makes it difficult to generalize these results to every user in every country. Lastly, we used only one version of ChatGPT (GPT-3.5), which has lower capabilities than more advanced versions such as ChatGPT 4.0. We ran the study on this version because it is the free version accessible to the public. 
The large language model ChatGPT provided moderately accurate and reproducible responses to common questions related to ophthalmology. ChatGPT is still not reliable enough to serve as a primary source of healthcare information; rather, it is helpful as a supplementary source. ChatGPT may serve as a helpful adjunct, but not an exclusive source, of information regarding eye-related diseases. We encourage future studies to examine how to utilize this technology to improve patient outcomes and quality of life. The main healthcare resources should not be replaced. Conceptualization: Asem A. Alqudah, Abdelwahab J. Aleshawi, Ibrahim Ayasrah, Yaqoot Ta'ani, Mohammad Al Salkhadi, Shaima'a Aljawarneh. Data curation: Asem A. Alqudah, Mohammed Baker, Ibrahim Ayasrah, Shaima'a Aljawarneh. Formal analysis: Mohammed Baker, Mohammad Al Salkhadi. Investigation: Asem A. Alqudah, Mohammed Baker, Zaina Alnajjar, Ibrahim Ayasrah, Mohammad Al Salkhadi. Methodology: Asem A. Alqudah, Mohammed Baker, Mohammad Al Salkhadi, Shaima'a Aljawarneh. Resources: Asem A. Alqudah, Mohammed Baker. Software: Asem A. Alqudah, Mohammed Baker, Ibrahim Ayasrah, Yaqoot Ta'ani. Supervision: Asem A. Alqudah, Zaina Alnajjar. Validation: Asem A. Alqudah, Abdelwahab J. Aleshawi, Mohammed Baker, Zaina Alnajjar, Ibrahim Ayasrah, Yaqoot Ta'ani, Mohammad Al Salkhadi, Shaima'a Aljawarneh. Visualization: Asem A. Alqudah, Abdelwahab J. Aleshawi, Mohammed Baker. Writing – original draft: Asem A. Alqudah, Abdelwahab J. Aleshawi, Zaina Alnajjar, Yaqoot Ta'ani, Shaima'a Aljawarneh. Writing – review & editing: Abdelwahab J. Aleshawi, Zaina Alnajjar, Ibrahim Ayasrah, Mohammad Al Salkhadi, Shaima'a Aljawarneh. |
Gender Disparities in Adverse Events Resulting From Low-Value Practices in Family Practice in Spain: A Retrospective Cohort Study | 172c2a76-c6e5-43b8-b163-5114f711551c | 11286494 | Family Medicine[mh] | Despite rising costs in developed Western societies, patient outcomes remain suboptimal, and adverse events continue to pose a significant challenge across all healthcare systems. Due to its role in orchestrating patient flow within the healthcare system, primary care is pivotal in achieving favorable patient outcomes. Although less studied, one of the causes of adverse events in primary care is directly related to recommending, administering, or prescribing healthcare services that are unlikely to benefit patients, which we consider overuse. The proportion of patients subjected to low-value practices (LVPs) in the United States, Canada, Australia, and Sweden reaches up to 80%, depending on the type of practice. In primary care in Spain, nearly 6 out of 10 adult patients and 4 out of 10 pediatric patients annually receive at least one prescription classified as overuse. Examining only the excess expenses resulting from unnecessary prescriptions of benzodiazepines, NSAIDs, lipid-lowering agents, paracetamol, and ibuprofen within a single year reveals an annual total surpassing 290 million euros. This constitutes 2.8% of the entire Spanish pharmaceutical expenditure in 2018, accounting solely for the cost of the prescribed medications. The continued occurrence of overuse in primary care is frequently linked to various factors, including limited time, constrained access to comprehensive patient data, defensive medical practice, and the approval of prescription decisions either made by healthcare colleagues or requested by patients. Recent studies also highlight differences in the frequency of LVPs between male and female patients. Moreover, the number of adverse events due to overuse has been suggested to be higher in women. 
Although it is known that women are negatively affected by gender bias in the therapeutic effort, and that they experience greater delays in diagnosis, the difference between male and female patients has not yet been investigated in relation to overuse, which means that interventions aimed at reducing it do not consider the differential impact on female patients, who could be particularly and negatively affected by its consequences. Therefore, the overarching aim of this research is to assess whether there are differences among male and female patients treated by male or female family physicians with regard to the occurrence of preventable adverse events due to LVPs in the primary care setting. In this study, we set out to test the following hypotheses, developed based on the results of previous studies within primary care. H 1 . A higher number of LVPs are identified among female patients compared to male patients within similar age groups and reasons for consultation. H 2 . Male and female family physicians are responsible for a similar number of LVPs among their patients. H 3 . A higher number of preventable adverse events related to LVPs are identified among female patients compared to male patients within similar age groups and reasons for consultation. H 4 . Male and female family physicians are involved in a similar number of preventable adverse events related to LVPs among their patients. H 5 . Preventable adverse events stemming from overuse are similar whether they are due to conditions or symptoms more commonly found in patients of a specific sex or attributed to gender-related reasons. Design A retrospective cohort study in which a random selection of patients attending primary care consultations in Alicante province (Spain) was performed. The STROBE checklist was used as a guide for reporting the study. The study protocol was published previously. 
Primary Care in Spain Spanish primary care is a cornerstone of the country's healthcare system, offering accessible and comprehensive healthcare services to individuals who require ongoing medical attention, often due to chronic illnesses. This level of care ensures universal access to quality healthcare for individuals of all ages. Preventive care, early intervention, and continuity of care are provided by multidisciplinary teams of family physicians, pediatricians, nurses, and allied health professionals. Definitions In this study, overuse was defined as continuing to do what should not be done (e.g. ignoring the "Do Not Do" recommendations). The LVPs considered in the study were derived from the Spanish Commitment to Quality initiative's list of recommendations, formulated according to the Choosing Wisely campaign's methodology to mitigate overuse. An adverse event was defined as an injury resulting from medical management, or a complication, rather than the underlying disease, leading to extended hospitalization and/or disability at discharge from medical care. Gender bias in health refers to differences in the treatment of women and men that are not justified by scientific evidence. This bias arises from assuming gender differences where there are none or from ignoring genuine differences that necessitate a distinct approach according to the evidence. Ethics The Research Ethics Board of Sant Joan Hospital approved the study protocol (reference 21/061). It was registered on ClinicalTrials.gov https://clinicaltrials.gov/study/NCT05233852 (NCT05233852). Procedure A group of reviewers (n = 40) was formed and trained in the identification of the study LVPs and in the data collection procedures. Training was provided using anonymized records. During the training, all reviewers assessed the same cases, and concordance was measured using Cohen's kappa coefficient. A score of 0.63 or higher was deemed acceptable, while a score of 0.84 or higher was considered excellent. 
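The inter-rater concordance check used in reviewer training can be illustrated with a minimal Cohen's kappa computation (a sketch with our own function names; the study does not state which software was used to compute kappa). The 0.63 and 0.84 cutoffs are those reported above:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same cases:
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - chance) / (1 - chance)

def concordance_level(kappa: float) -> str:
    """Cutoffs reported for the training phase: >= 0.63 acceptable,
    >= 0.84 excellent."""
    if kappa >= 0.84:
        return "excellent"
    if kappa >= 0.63:
        return "acceptable"
    return "insufficient"
```

Training would then continue, with all reviewers grading the same cases, until `concordance_level` returns "excellent".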
Training concluded once an excellent level of concordance was reached. The list of reviewers involved in this study is provided in . Each reviewer independently assessed the selected medical records and recorded the study data. Upon identifying an LVP, the reviewer evaluated potential adverse events and, if present, assessed their severity and the extent of harm using the Woods et al. scale, on which higher scores indicate greater severity and a stronger relationship between the practice and the harm. Events with scores above 3 were classified as adverse events, while those with scores above 4 were attributed to LVPs. A blinded recording system was employed.

Data Collection

Data were extracted from the primary care electronic medical records database, Abucasis, between 15 March 2023 and 31 August 2023. In Alicante, as in the rest of Spain, all the information about a patient is registered in a single electronic medical record. Data from the medical records were collected using an electronic data collection platform, which incorporated a trigger tool to facilitate the identification and recording of adverse events. This tool, previously used in the SOBRINA study, was based on the recommendations of Rosenberg et al. The LVPs considered in this study were agreed upon in a previous study. An online consensus technique involving 33 health professionals from family medicine, cardiology, intensive care, and geriatrics was conducted to reach a consensus on LVPs, considering three aspects: 1) whether it was still a relatively frequent LVP in primary care; 2) whether its frequency of application differed between men and women, with a probable association with sex or gender; and 3) whether the LVP could cause a severe adverse event in the patient. Panelists marked their level of agreement/disagreement on a scale of 0 (strongly disagree) to 10 (strongly agree). The resulting score was the sum of the three scales.
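The panel scoring just described (three criteria, each rated 0–10 and summed) can be sketched in a few lines together with the retention rule the study applies (sums of 20 or more retained, under 10 discarded). The candidate names and ratings below are illustrative assumptions, not the study's actual list, and how the study handled sums between 10 and 19 is also an assumption here.

```python
# Sketch of the panel consensus filter (hedged illustration). Each candidate
# LVP receives three 0-10 ratings (frequency in primary care, sex/gender
# difference in application, potential for severe harm); the sum decides
# retention. Names and ratings below are invented for illustration.

def consensus_decision(ratings):
    """Apply the retention rule to one candidate's three panel ratings."""
    assert len(ratings) == 3 and all(0 <= r <= 10 for r in ratings)
    total = sum(ratings)
    if total >= 20:
        return "retained"
    if total < 10:
        return "discarded"
    return "undecided"  # 10-19 band: handling not specified in the text

candidates = {
    "illustrative_lvp_a": (8, 9, 7),   # sum 24 -> retained
    "illustrative_lvp_b": (3, 2, 2),   # sum 7  -> discarded
    "illustrative_lvp_c": (5, 4, 5),   # sum 14 -> undecided
}
decisions = {name: consensus_decision(r) for name, r in candidates.items()}
print(decisions)
```

In the study, the "undecided" band was resolved by further panel review; the three-way split above simply makes the two explicit thresholds visible.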
The LVPs that yielded a score of 20 points or more were retained (consensus criterion), and those scoring under 10 points were discarded. A select group of panelists was then asked to review the final list of LVPs. Additionally, during a session with experts (clinicians and researchers on gender bias in health), a debate was held and consensus was reached on whether the differences between men and women observed in these previously selected LVPs should be attributed to the presence of gender inequalities in healthcare. In cases where a treatment (or test) is indicated for a condition or symptoms that are more prevalent in patients of a specific sex, it was assumed that the risk of overtreatment (or overuse) in patients of that sex is higher than in the other. However, when there is no evidence that the symptoms or the prevalence of the condition for which the treatment is provided differ between the sexes, it was assumed that differences in the application of the practice are due to gender-related reasons. We used a scale ranging from −5 (entirely attributable to conditions or symptoms that are more prevalent in patients of a specific sex) to +5 (entirely attributable to gender bias). shows the outcome of this consensus among experts.

Sample

The proportion of medical records with at least one LVP was expected to be 50%. With an alpha risk of 0.05 and an accuracy of 2.5%, the minimum required sample size was determined to be 1,538 medical records (50% of which were from women). The study sample was stratified by age group and sex, considering the visit frequencies recorded in the National Health System's primary care information system for 2018. Study participants were divided into three age groups (18–59 years, 60–74 years, and >75 years) based on reference ages from prior studies. A simple random sampling method with k = 5 was used to select the medical records of patients attended in the past 3 years.
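The sample-size figure above can be approximately reproduced with the standard formula for estimating a proportion, n = z²·p·(1−p)/d², assuming p = 0.50, α = 0.05 (z ≈ 1.96), and absolute precision d = 0.025; the formula yields 1,537, so the study's reported 1,538 presumably reflects rounding or a small design allowance.

```python
import math

# Hedged reconstruction of the sample-size calculation: expected prevalence
# p = 0.50, alpha risk 0.05 (two-sided z ~ 1.96), absolute precision d = 0.025.
z, p, d = 1.96, 0.50, 0.025
n = math.ceil(z ** 2 * p * (1 - p) / d ** 2)
print(n)  # 1537, close to the 1,538 records reported in the study
```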
Data Analysis

Considering the higher frequency with which female patients attend primary care consultations (in Spain, 9.6 vs. 5.7 visits per year in 2022), adjusted rates of LVPs and preventable adverse events were calculated to correct for this effect in the interpretation of the data. The chi-square test with Yates correction was used to compare the frequency of LVPs in men and women, and the Cochran-Mantel-Haenszel test was used to analyze differences in the adjusted rates between the sexes. To analyze the relationship between the presence of an adverse event (dependent variable) and the corresponding independent variables (age, number of daily medications, patient's gender, physician's gender, and their interaction), a Generalized Linear Mixed Model (GLMM) was used. This model accounts for random effects to cover cases in which the same patient is affected by more than one adverse event. Statistical significance for all tests was set at p < 0.05 (two-tailed). The analyses were conducted using SPSS statistical software and RStudio V.1.1.463.
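The 2×2 chi-square test with Yates continuity correction mentioned in the Data Analysis description can be sketched in a few lines using the standard formula χ² = Σ(|O − E| − 0.5)²/E. The counts used here are illustrative only, not the study's data.

```python
# Hedged sketch of the chi-square test with Yates continuity correction for a
# 2x2 table, as used to compare LVP frequency between men and women.
# The counts below are illustrative only.

def yates_chi2(a, b, c, d):
    """Chi-square statistic with Yates correction for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for obs, i, j in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        expected = rows[i] * cols[j] / n
        stat += (abs(obs - expected) - 0.5) ** 2 / expected
    return stat

# Illustrative counts: patients with / without an LVP, split by sex
chi2 = yates_chi2(30, 70, 50, 50)
print(round(chi2, 3))  # 7.521; compare against 3.841, the df = 1 critical value at alpha = 0.05
```

In practice this is equivalent to `scipy.stats.chi2_contingency(table, correction=True)`; the explicit loop just makes the continuity correction visible.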
In total, 1,538 electronic medical records were reviewed; after exclusions due to missing data, 1,516 patients were included, of whom 911 (60.1%) were female. The mean age of the patients attended during the study period was 56.4 years (SD 19.4) for male patients and 55.2 years (SD 20.8) for female patients. Patients were taking an average of 3.7 medications daily (range 1–25). A total of 245 (68.1%) patients treated by male family physicians were taking fewer than five drugs per day, while 115 (31.9%) were taking five or more drugs daily. Among patients treated by female family physicians, 769 (67.85%) were taking fewer than five drugs per day and 365 (32.1%) were taking five or more. The most frequent main diagnoses in this sample were hypertension and type 2 diabetes.

H1. A higher number of LVPs are identified among female patients.

The prevalence of patients suffering LVPs was 465/1,516 (30.7%). A total of 221/605 (36.5%) LVPs occurred in male patients, while 417/911 (45.7%) occurred in female patients (p-value = 0.022). As the patient's age increased, the frequency of LVPs also increased (p-value = 0.024). The number of patients who experienced at least one LVP was 465 (170/605, 28.1% of male patients and 295/911, 32.4% of female patients).
In 286 patients, two or more LVPs were registered (103/605, 17.0% of male patients; 183/911, 20.1% of female patients). The data confirm H1, with the LVPs considered in this study being more frequent among female than among male patients.

H2. Male and female family physicians are responsible for a similar number of LVPs.

A total of 156/360 (43.3%) LVPs were observed in patients treated by male physicians and 482/1,134 (42.5%) in patients treated by female physicians (p-value = 0.950). When these LVPs were analyzed considering both the patient's sex and the professional's sex, it was observed that only when the family physician was female did female patients experience more LVPs than male patients. The findings suggest rejecting H2, at least partially, as there was a higher frequency of LVPs among female patients treated by female family physicians compared to male patients treated by the same female family physicians.

H3. A higher number of preventable adverse events related to LVPs are identified among female patients.

During the review of the electronic medical records, a total of 124 adverse events were identified among 105 patients subjected to one or multiple LVPs (124/638, 19.4%), of which 35/221 (15.8%) were experienced by male patients and 89/417 (21.3%) by female patients. A total of 26 patients (26/105, 24.7%) experienced two or more preventable adverse events related to the LVPs included in the study. These occurrences of more than one adverse event related to LVPs were concentrated in individuals aged 60 or older. Among male patients, six (19.35%) experienced more than one adverse event, all of whom were treated by male physicians. Among female patients, 20 (27.03%) experienced more than one adverse event, of whom 6 (30%) were treated by male physicians and 14 (70%) by female physicians (p-value = 0.465).
The severity of the adverse events tended to be slightly higher for female patients, but the difference was not statistically significant (p-value = 0.058). The data allow us to accept H3, as the trend suggests that female patients experience a higher volume of preventable adverse events related to LVPs than male patients treated for the same health issue.

H4. Male and female family physicians are involved in a similar number of preventable adverse events related to LVPs.

When the interaction between patient sex and physician sex was analyzed, a higher proportion of patients attended by male physicians experienced an adverse event compared to those attended by female physicians (p-value < 0.001), and in cases attended by a female physician, female patients experienced more adverse events than male patients (p-value < 0.002). The severity of the adverse events suffered by male and female patients was higher when the patients were attended by male family physicians (p-value < 0.001). Most adverse events were related to medication (99, 79.8%). No differences were identified in the nature of the adverse events suffered by patients treated by male versus female family physicians (p-value = 0.286). As the patient's age and the number of daily medications increase, the number of adverse events tends to rise. An interaction effect was observed between the patient's sex and the family physician's sex, such that when both the physician and the patient are female, there is a significant increase in the probability of adverse events; however, when the patient is male, being attended by a female physician reduces the probability of experiencing an adverse event. Based on these data, which indicate that the therapeutic decisions made by male and female family physicians had a differentiated effect on the occurrence of preventable adverse events related to LVPs among their patients of either sex, H4 was rejected.
H5. Overuse-related adverse events attributed to sex/gender reasons exhibit similarities in specific conditions.

Despite a similar frequency of unnecessary prescriptions or tests for both men and women, whether related to LVPs associated with conditions more prevalent in female patients or influenced by gender-based reasons, a higher number of adverse events occurred in cases linked to LVPs potentially driven by gender bias. Consequently, H5 was rejected based on the data.

The data from this study support the notion that overutilization poses a risk to patient safety. Additionally, they suggest rejecting the assumption that the frequency of LVPs and the number of preventable adverse events involving male and female family physicians are similar; rather, they support the idea that women experience a higher number of LVPs and related adverse events. The data suggest an interaction effect between the patient's and the physician's gender regarding the frequency of both severe and mild adverse events, which deserves further attention. This interaction may be specific to the type of LVPs studied in this research. Furthermore, LVPs influenced by gender-based conceptions are more likely to result in unsafe care. The extent and number of LVPs and their economic impact have been studied for years in various countries and at various healthcare levels. Some recent studies have identified the impact of LVPs in terms of patient safety, linking LVPs to the occurrence of preventable adverse events. In one of the initial studies on this topic, our group found that female patients experienced more adverse events related to LVPs than male patients. In this second study, we aimed to delve deeper into this issue, which affects women's health. To address it, a set of LVPs was first identified in which these differences between males and females could be more pronounced.
Second, a review of a set of medical records of patients of both sexes was conducted to describe the frequency with which male and female patients experienced preventable adverse events related to these LVPs. In this study, the women whose medical records were analyzed experienced a higher volume of these LVPs during the primary care they received. These data suggest that utilization plays a significant role in overutilization. They also corroborate previous observations indicating gender differences that negatively impact the quality of care received by women. This study further analyzes the discrepancy in LVP frequency between men and women, specifying that when a female patient is treated by a female physician, there is a higher likelihood (up to 7% more) of experiencing one of the LVPs analyzed in this study. These results could be due to family physicians, as suggested in other studies, assuming differences between men and women when there are none. It is not new that some diseases are more often attributed to men and others to women, generating a bias in diagnostic criteria and in access to complementary tests or treatments. However, the higher number of adverse events in those cases suspected of gender bias is a novel finding. There is evidence showing that gender, as a social construct, has a substantial impact on health behaviors, on access to and use of health systems, and on health system responses. Gender bias can be defined as a systematic error in the social construction of the disease's history and symptoms, which produces inequitable responses to health problems from the health services, as well as discriminatory responses by professionals. Strategies designed to reduce overutilization could consider these findings and refine their approach, recognizing that female patients have a higher probability of receiving an LVP than male patients.
One possible explanation is the higher utilization of healthcare, or healthcare-seeking behavior, among women due to a persistent gender bias in our society, in which they often take responsibility for family health. Another explanation lies in the recent feminization of the medical profession, which might result in a younger female workforce and, therefore, less experience among these female physicians compared with their male counterparts. It could also be attributed to patients exerting more pressure on female physicians than on male physicians to undergo diagnostic tests or receive specific treatments; this could be influenced by the different status assigned to female professionals, owing to enduring gender biases, as opposed to their male counterparts. The data collected reveal that nearly one in five LVPs ultimately results in a preventable adverse event. In other words, in 2 out of every 10 LVPs, harm is caused by acting on the patient through a treatment that should not have been initiated. As in other studies, we also observed that a higher number of preventable adverse events occurred among older patients. The data also suggest that, in the more severe adverse events, the involvement of male family physicians was higher than that of female physicians. Furthermore, female patients, when treated by female family physicians, exhibited a higher proportion of mild adverse events than male patients. We know that overutilization poses a threat to the survival of healthcare systems, and its risk to patients is becoming increasingly evident. In the majority of the preventable adverse events identified, the severity of the damage was mild; however, nearly two out of ten resulted in severe permanent consequences for the patient. In both hospitals and primary care, it has been emphasized that LVPs are not as innocuous as previously thought.
Ordering a test when it is unnecessary, for example, opens up the possibility of initiating equally unnecessary treatments, putting the patient at risk and burdening the healthcare system with unnecessary costs, to the detriment of other patients in need of care. Considering the latest data indicating that around 7% of patients in primary care in Spain experience an adverse event in a year, the findings of this study clearly point to overutilization as a risk factor, given that the frequency of adverse events associated with LVPs is nearly three times higher than expected. Other studies conducted in various countries report rates of adverse events in primary care ranging between 1% and 24%, with the most common frequency being around 6%–7% and with 1.6% considered severe events. LVPs also pose a threat to the sustainability of healthcare systems due to the increased costs they entail. Initiatives implemented to reduce overuse have yielded diverse outcomes. The debate on overutilization and its impact on individuals and systems has expanded, showing that multicomponent interventions are the most effective in reducing overuse. These interventions, which combine various elements, should incorporate information regarding sex- and gender-based beliefs and biases that contribute to women receiving more LVPs, especially when some of these culminate in adverse events.

Implications

These findings have implications for the content of programs aimed at raising awareness among professionals about the impact of overuse on health outcomes. Given these data, it is advisable to address these potential differences in outcomes between male and female patients when planning awareness campaigns. Some examples highlight that collaboration between patients, caregivers, and clinicians yields positive outcomes in primary care, and a similar approach could be pursued in this case to reduce overuse and concurrently enhance patient safety.
Decision aids aimed at increasing patient safety could consider these results to prioritize situations in which the differences between men and women are more pronounced. Moreover, in clinical practice, particularly concerning these LVPs, clinicians should consider that an unnecessary indication may have an unexpected and negative impact leading to adverse events. Therefore, when making decisions, they should acknowledge that a low-value indication is not harmless and may negatively affect patient safety. They should also assess whether the therapeutic approach is disproportionately affecting female patients compared to male patients, inadvertently impacting their health status. Finally, patient schools (e.g., expert patient programs) and informal caregiver education could serve as suitable platforms to educate both groups about the risks of LVPs in terms of patient safety. In essence, as patient safety remains a challenge for all primary care professionals, these data suggest initiating discussions about how overuse compromises patient safety. Practices that may seem inconsequential can result in a suboptimal level of care. These results raise new questions. For instance, to what extent do defensive medicine practices causing overuse differ between male and female professionals, and which patient profiles are more susceptible? Additionally, do decision aids integrated into digital systems reduce disparities in LVPs between male and female patients? Studies on overutilization have identified the frequency of various LVPs in different countries; however, transnational comparisons of these LVPs have not been conducted and could be valuable in determining which strategies are more effective in reducing overuse among male and female patients, considering diverse factors.

Limitations

The sample size was calculated for a set of LVPs and not to determine the impact of gender on the outcome variables for each individual LVP.
This study did not differentiate whether the observed differences were due to sex (biological) or gender (social) issues. Since the medical record system (Abucasis) does not include data on race, ethnic group, or socioeconomic status, these variables could not be considered. The clinical experience of the professionals who attended the patients whose medical records were reviewed could not be determined, since such information is not encoded and accessing it would have compromised the anonymization of the data; the data extracted on professionals were limited to gender. Professionals did not review their own histories; all coding and recording of information relied on the work of the reviewers. These data were collected from a limited number of cases of each LVP. More work is needed to understand the drivers of low-value care for male and female patients when attended by male and female family physicians.

Conclusion

The prescriptions and tests considered of low value for the patient, as studied in this research, correspond to common and frequent situations in primary care. They represent a small part of the myriad of conditions addressed at this healthcare level, showcasing only a fraction of the broader reality within primary care settings. Consequently, they serve as a sample that underscores a much larger reality in which overuse poses a severe challenge for professionals, patients, and healthcare systems. Although the majority of safety incidents are deemed minor and lack permanent consequences, our findings indicate that in some cases they significantly impact patients' health. Moreover, these results prompt deeper reflection on, and exploration of, the influence that gender differences, stemming from both biological and social reasons, might have on overuse and on the frequency and nature of the associated safety incidents.
These findings have implications for the content of programs aimed at raising awareness among professionals about the impact of overuse on health outcomes. Given these data, it is advisable to address these potential differences in outcomes between male and female patients when planning awareness campaigns. Some examples highlight that collaboration between patients, caregivers, and clinicians yields positive outcomes in primary care, and a similar approach could be pursued in this case to reduce overuse and concurrently enhance patient safety . Decision aids aimed at increasing patient safety could consider these results to prioritize situations where differences between men and women are more pronounced. Moreover, in clinical practice, particularly concerning these LVPs, clinicians should consider that an unnecessary indication may have an unexpected and negative impact leading to adverse events. Therefore, when making decisions, they should acknowledge that a low-value indication is not harmless and may negatively affect patient safety. They should assess whether the therapeutic approach is disproportionately affecting female patients compared to male patients, inadvertently impacting their health status. Finally, patient schools (e.g., patient experts) and informal caregiver education could serve as suitable platforms to educate both groups about the risks of LVPS in terms of patient safety. In essence, as patient safety remains a challenge for all primary care professionals , this data suggests initiating discussions about how overuse compromises patient safety. Despite practices that may seem inconsequential, they can result in a suboptimal level of care. These results raise new questions. For instance, to what extent do defensive medicine practices causing overuse differ between male and female professionals, and which patient profiles are more susceptible? 
Additionally, do decision aids integrated into digital systems reduce disparities in LVPs between male and female patients? Studies on overutilization have identified the frequency of various LVPs in different countries. However, transnational comparisons of these LVPs have not been conducted and could be valuable in determining which strategies are more effective in reducing overuse, considering diverse factors, among male and female patients. The sample size was calculated for a set of LVPs and not to determine the impact of gender on the outcome variables for each individual LVP. This study did not delve into differentiating whether the found differences were due to sex (biological) or gender (social) issues. Since the medical record system (Abucasis) does not include data on race, ethnic group, or socioeconomic status, these variables could not be considered. The clinical experience of the professionals who attended to the patients whose medical records were reviewed could not be determined since such information is not encoded and accessing it would have compromised the anonymization of the data. The data extraction for professionals was limited to gender. Professionals did not review their own histories, all coding and recording of information relied on the work of the reviewers. These data were collected from a limited number of cases of each LVP. More work is needed to understand the drivers of low-value care on males and females when attended by male and female family physicians. The prescriptions and tests considered of low value for the patient, as studied in this research, correspond to common and frequent situations in primary care. They represent a small part of the myriad of conditions addressed at this healthcare level, showcasing only a fraction of the broader reality within primary care settings. 
Consequently, they serve as a mere sample, underscoring a much larger reality in which overuse poses a severe challenge for professionals, patients, and healthcare systems. This issue also poses a real risk to patients: although the majority of safety incidents are deemed minor and lack permanent consequences, our findings indicate that in some cases they significantly impact patients' health. Moreover, these results prompt a deeper reflection and exploration into the influence that gender differences, stemming from both biological and social reasons, might have on overuse and on the frequency and nature of associated safety incidents. |
Biosafety procedures for handling intraoperative surgical samples during COVID-19 pandemic: an Italian pathology laboratory experience | 08555cb2-66e1-4ca6-acb0-9e6d6322e593 | 8183348 | Pathology[mh] | As is known, a novel viral pandemic, driven by the 2019-nCoV, also known as severe acute respiratory syndrome Coronavirus 2 (SARS-CoV-2), started from the initial epicenter in Wuhan (China) and spread all over the world with dramatic healthcare consequences , . On January 31st 2020, the number of positive patients in Europe, and mostly in Italy, significantly increased. The infection then rapidly spread in Northern Italy, where 11 municipalities were placed under quarantine. On March 9th 2020, given the persistent and steep increase in deaths and positive cases, the Italian Prime Minister, Giuseppe Conte, imposed a lockdown on all of Italy and placed more than 60 million people in quarantine. Up to now, Italy is one of the European centers with the most active Coronavirus cases, with 233,836 positive cases and 33,601 total deaths as of June 3rd. The pandemic growth congested Italian hospitals with symptomatic patients and diverted most medical and paramedical staff to this dramatic emergency , . At the same time, unfortunately, neoplastic pathologies requiring urgent surgical and oncological treatment continue to affect the Italian population. Our hospital, Policlinico Gemelli (Rome, Italy), represents one of the biggest Italian referral centers for oncological patients, including also a large number of gynecological cancers. The high volume of surgical procedures demanded an equally high volume of intraoperative pathological examinations (frozen sections) but posed an additional major challenge for the safety of the staff involved. 
In agreement with internal and national guidelines for biosafety in containing the pandemic spread, our pathology laboratory adopted further safety procedures for processing fresh surgical samples from suspect patients who are also affected by severe neoplastic pathologies. The current commentary reports our experience in the past two months (since March 9th) for a total of 1271 frozen exams from 893 suspect COVID-19 patients (31 confirmed with PCR). Patient characteristics and frozen section details are summarized in . It is mandatory to ensure that all the possible biosafety conditions are met for the medical and technical staff involved in this procedure. According to the recent WHO recommendations, all samples sent to the histopathology laboratory should be considered as potentially infectious . Moreover, the Centers for Disease Control and Prevention (CDC) released Interim Laboratory Biosafety Guidelines for Handling and Processing Specimens Associated with Coronavirus Disease 2019 (COVID-19) . Adapting the indications from both guidelines, we developed an internal protocol for intraoperative examination of fresh specimens from patients with suspect or proven COVID-19 infection in order to reduce the risk for laboratory personnel. We defined the following internal protocol for the management of frozen exams. Only undeferrable surgical procedures involving COVID-19-positive/suspect patients should be performed. Surgical procedures involving COVID patients should be planned and scheduled in order to avoid cross-contamination between patients; if possible, they should be performed in a dedicated operative section. Specimen transport All specimens should be considered as possibly positive for COVID-19. The hospital should designate a well-defined path, ensuring dedicated corridors and elevators for COVID patients/specimens. 
Specimen transit from the operating room to the COVID-dedicated frozen section room (FSR) must be as short as possible, along a pre-defined direct path and away from other people within the hospital, in order to minimize the chances of outbreak. Specifically trained transport personnel, equipped with personal protective equipment (PPEs), should be the same from origin to destination. Utilized lifts must be sanitized. A dedicated, specifically trained 24/7 cleaning team is suggested. Adequate training of all the involved personnel is mandatory. Once the specimen has been admitted to the FSR, the door must be kept closed. The FSR should have controlled inward directional airflow: high FSR air exchange cycles are recommended (> 25 exchanges/h), which contributes to effectively reducing the viral load within the FSR. Personnel involved in frozen section processing should not leave the room during the procedure. Only one technician and one pathologist are allowed in the FSR. The frozen-dedicated personnel must wear protective equipment such as disposable gloves; solid-front or wrap-around gown; scrub suit, or coverall with sleeves that fully cover the forearms; head covering; shoe covers or dedicated shoes; safety goggles or face shield; EU FFP2 or surgical masks . A dedicated freezing microtome for specimens of suspect or proven COVID-19 patients. A dedicated certified Class II Biological Safety hood for sample grossing. Separate sample storage in a dedicated certified Class II Biological Safety cabinet. A dedicated hand-wash sink. All possible sources of infection while removing PPE should be avoided; therefore an adequate procedure must be adopted by the healthcare operators. Please consider the following order of actions: First, remove the first pair of gloves, which is likely to be highly contaminated. 
Subsequently, remove with care all other PPEs during the doffing procedure, in this order: protective suit, shoe covers, head cap, face mask and glasses, taking care to handle the face mask by the ear laces, without touching its external side. The second pair of gloves must be removed as the very last PPE. Finally, proceed immediately with hand disinfection with hydro-alcoholic solution. Removed PPE must be placed outside the FSR in dedicated areas, ensuring the virus is not transmitted to the healthcare worker. Disinfection of external and internal cabinet components must be performed daily with 99% ethanol or 0.5% chlorine derivative . The hood and frozen microtome should be sanitized after every use. The FSR and surrounding donning/doffing areas must be disinfected as soon as possible after each procedure. Well-identifiable containers for infectious-risk health waste (IRHW) should be dedicated for single-use potentially infected disposables (sharps and other material). Reusable materials should be decontaminated, washed, dried, and disinfected/sterilized. The COVID-19 epidemic has not yet shown signs of receding despite a recent apparent drop in the number of new cases. In fact, starting on May 4th, Italy entered Phase two and relaxed the lockdown, similar to other European countries. We believe that our laboratory protocols were useful to control and limit the possibilities of infection in personnel. In fact, to date, no infected workers have been recorded in our unit. Moreover, despite all adopted biosafety procedures, the mean time of intraoperative diagnosis was 29.38 minutes (range 17-49 minutes). Therefore, if the personnel is well trained, our suggested protocols are able to ensure a good balance between laboratory security and rapid diagnostic times. In our opinion, also in Phase two, we must not lower our guard, and evaluation of intraoperative surgical samples should still be limited to what is strictly necessary. 
Our protocol, together with the current guidelines, may provide further safety for pathology laboratory workers and prevent the spread of infection. Although the pathologist has limited direct contact with patients, the intraoperative exam of fresh, potentially infected specimens carries a high infectious risk and needs to be performed in safe conditions. The resiliency of our universal healthcare system is based on the strength and safety of all the system components, ensuring the best possible patient care in conditions of stress. |
The elicitation of patient preferences for hip replacement surgery: a discrete choice experiment | 065c816d-d61-46bc-865c-9a2ac505bbb3 | 11834257 | Surgical Procedures, Operative[mh] | Total hip replacement (THR) is one of the most frequent surgeries worldwide; approximately one million patients undergo THR surgery annually . In Germany, THR is one of the most common inpatient surgeries, with nearly 250,000 procedures per year. More than 160,000 patients with hip arthrosis undergo primary elective THR annually in approximately 1,250 hospitals . However, the frequency of THR surgeries in Germany differs widely from region to region; Schäfer et al. report a variation factor of 2.8 between federal states, with higher rates in the south and the north-west. They claim that this may be caused – at least to some extent – by “the absence of standardized decision criteria as basis for the indication of THR in a transparent and consistent way”. The numbers shown above demonstrate that patients in Germany have the option to choose from a large number of hospitals to undergo THR surgery. However, data from the German external hospital quality assurance system indicate that the effectiveness of treatment (i.e. the success of the surgery) varies between hospitals. Under this system, hospitals must document, for each patient, certain interventions such as THR surgery based on a set of in-hospital quality indicators . The latest German hospital quality report from 2024 lists the share of hospitals with quality deficits ranging between 4.15%, regarding “Prevention of falls measures” (47 out of 1,132 hospitals), and 24.46%, regarding “Indication for hip endoprosthesis or component replacement” (251 out of 1,026 hospitals) . These quality differences are not surprising since other studies also demonstrate hospital-related variations in the quality of THR surgery. 
For example, a recently published meta-analysis of 44 studies focusing on the impact of hospital volume on the outcomes of THR indicates that low-volume hospitals are associated with higher rates of surgical site infections, 90-day complications, costs, and mortality . Previous evidence from Germany shows lower risk-adjusted in-hospital mortality rates for THR in high-volume hospitals compared to low-volume hospitals (0.10% vs. 0.23%) . Against this backdrop, it seems important for patients to be able to make an informed choice; i.e. to select the best-performing or “right” hospital. Therefore, public reporting aims to support patients and other consumers by providing quality information about health care providers such as hospitals, nursing homes, or practitioners . For this purpose, publicly available Internet rating websites have been developed and implemented in many high-income countries like the United Kingdom, the United States, and Germany . For example, the leading German public reporting portal “Weisse Liste” (in English, white list, WL) was jointly initiated by the non-profit foundation “Bertelsmann Stiftung” and the main patients’ and consumers’ associations in 2008 . More recently, the operation of WL ceased in March 2024; however, it will serve as an important element of its successor “Klinik-Atlas” in terms of, e.g. user guidance, selected quality measures, and other aspects . The latter is the first German government-run public reporting website with the intention to provide quality-related information on the hospital level for the public as of May 2024 . So far, there is limited information regarding the detailed types of quality information that will be provided on “Klinik-Atlas”, but the content is likely to be extended shortly and to become similar to WL (e.g. number of cases treated, clinical measures, staff-related information), apart from novel types of information . 
Public reporting websites should fulfil a number of requirements in order to have an impact on health care delivery . They should present the information that consumers value most while keeping the amount of quality information low in order to minimise complexity . As shown in previous literature, calculating composite measures based on consumer preferences is one way to achieve this . The objective of our study is to learn more about consumers’ preferences when choosing a hospital for THR surgery on the German public reporting website WL. We focus exclusively on the publicly reported quality information about hospitals regarding elective THR surgery and exclude parameters related to patients as individuals. This may help in the future to calculate transparent weighted composite measures, which may make it easier for patients to make an informed choice. Hence, we address the following three questions: How do patients rate different publicly available hospital quality information regarding THR surgery? What relative importance do patients assign to different types of quality information? Can we identify groups of respondents with similar preferences? Methodology of the discrete choice experiment The objective of this study is to elicit patients’ preferences regarding hospital choice for THR surgery through the utilisation of a discrete choice experiment (DCE). Building upon the theoretical work of McFadden and Lancaster , a DCE is a method that uses (survey) data on stated preferences based on a set of hypothetical choice scenarios (e.g. hospital choice) to systematically investigate the structure of individuals’ preferences. DCEs allow for reduced complexity for respondents due to the pairwise comparison of alternatives, which is an advantage compared with traditional formats for the elicitation of stated preferences, like rating- and ranking-based methods . In addition, they facilitate realistic trade-off decisions . 
Therefore, DCEs are applied increasingly often in the field of preference research in health care . The design and analysis of the DCE are based on standardised research practices for conducting conjoint analysis developed by the ISPOR Conjoint Analysis Good Research Practices Task Force . However, in contrast to previous studies focusing on the relative importance of quality information for hospital choice in general (e.g. ), we aim to investigate how patients rate the hospital quality information with respect to THR surgery publicly available on WL and the relative importance patients assign to this information. Based on this, our results might help determine relative weights for publicly reported quality information on WL for the calculation of a composite performance measure. Such weighted composite measures may be integrated into health report services in the future to support patients in making an informed hospital choice. Thus, instead of conducting both a systematic literature review and qualitative research (e.g. semi-structured interviews, focus groups) to identify and choose the most important attributes for patients’ preferences (see, e.g. ), we select our attributes in line with the publicly reported quality information provided on WL, one of the major health report cards in Germany at the time of the experiment. We focus on the information items primarily presented to users in the search interface of WL: (1) “Quality of treatment”, which indicates whether a hospital fulfils obligatory quality targets such as, e.g. 
certain treatment outcomes; (2) “Recommendation from other patients”, which provides information on the experiences of previous patients; (3) “Annual number of cases treated”, which is supposed to reflect the experience of a hospital, based on the assumption that high numbers of cases indicate more in-depth experience with the special treatment; (4) “Equipment and qualification”, which covers the (non-)satisfaction of compulsory quality targets in the provision of medical equipment and the appropriate qualification of medical staff; and (5) “EndoCert Certificate”, which indicates possible certifications that a hospital may have received from the “EndoCert Certification System”, an independent initiative for quality auditing for endoprosthetic care founded by the German Society for Orthopaedics and Orthopaedic Surgery . We focus exclusively on criteria of hospital quality and refrain from considering individual patient-related features, such as travelling distance, which depend on each patient’s situation and willingness to share such information. Moreover, for reasons of transparency and simplicity, the composite measure should only build upon primary information in order to be useful for each patient. Each attribute contains three levels (see Table ). We explain and illustrate each attribute to participants using simple examples and basic facts (for a detailed list of underlying information for each attribute as used on WL, please refer to Table A.1 of the Supplementary Material). We employ effect coding on all attributes (e.g. ). Effect coding enables us to construct estimates and standard errors of all levels against their deviation from each attribute’s mean. Hence, both estimates and standard errors are stable and independent of an arbitrary benchmark level. Survey design There are four parts to the survey. In the first part, we collect information about the participants’ motivation to use WL, their expectations, and their previous experience with HRCs. 
In the second part, respondents are presented with information on all five attributes and their levels and are asked to rate the attributes on a scale of 1–5 (1 = not at all important; 5 = very important) and to rank them against each other. The third section contains ten DCE choice tasks in which respondents are asked to choose between two hypothetical hospitals. The final section collects socio-demographic and health-related information on participants for subgroup analyses. Participants were also given the opportunity to provide feedback on the survey and to enter a prize drawing for 10 online vouchers worth EUR 50 each. Before the survey was launched, the questionnaire was anonymously piloted for clarity and comprehensibility by 20 people and modified accordingly. Experimental design We designed the survey using Sawtooth Software (Lighthouse Studio Version 9.14.0) as a full profile design, i.e. each choice set includes all five attributes. We generated the final set using the balanced overlap method, which permits the estimation of both main and interaction effects with standard errors below 0.05 and 0.1 and the highest D-efficiency score . The choice tasks are forced-choice tasks, i.e. respondents have to choose one of two hypothetical hospitals by making trade-offs between attributes and levels . With this approach, the experiment provides a setting that is close to reality, as comparable trade-off decisions are part of daily life . We administered the survey as an onsite-based survey on the German HRC Weisse Liste (WL). Over a 2-month period (April and May 2023), we invited all users of WL who either searched for information on THR surgery or performed hospital comparisons on THR surgery to participate in our study. Participants were free to take part and could terminate the survey at any time. Please refer to the supplementary material for an English translation of the questionnaire. 
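The effect coding described in the methods above can be illustrated with a minimal sketch (the helper function is ours, not part of the study's Sawtooth/R pipeline): each three-level attribute is mapped to two columns, with the last level coded −1 in every column, so that the columns sum to zero across levels and estimates read as deviations from the attribute mean rather than from an arbitrary benchmark level.

```python
def effect_code(level: int, n_levels: int) -> list:
    """Effect-code a 0-based categorical level into n_levels - 1 columns.
    The last level serves as the reference and is coded -1 in every column."""
    if level == n_levels - 1:
        return [-1] * (n_levels - 1)
    row = [0] * (n_levels - 1)
    row[level] = 1
    return row

# The three levels of one attribute, e.g. "Annual number of cases treated":
rows = [effect_code(level, 3) for level in range(3)]
print(rows)  # [[1, 0], [0, 1], [-1, -1]]; each column sums to zero
```

Because the columns sum to zero, the level estimates are centred on the attribute mean, which is why they do not depend on the choice of a benchmark level.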
Sample size Our study design consists of ten choice tasks per respondent, including two alternatives (i.e. hospitals) per choice task and a maximum number of three levels across all attributes. As suggested by Orme , the design specifications require a minimum number of participants according to N ≥ 500l / (t·a), where N is the number of respondents, t the number of tasks, a the number of alternatives and l the maximum number of levels. In our setting with {t, a, l} = {10, 2, 3}, we derive a minimum of 75 participants. However, since this number indicates only the lower bound for main effect estimation, we require at least 150 respondents, as recommended for achieving statistical robustness . Data analyses For data analyses, we use R Statistical Software (Version 4.2.2; R Foundation for Statistical Computing, Vienna, Austria) and the corresponding packages “mlogit” and “gmnl” . As a starting point, we use the standard multinomial logit model (MNL), as initially developed by McFadden . MNL is attractive because of its simplicity in terms of estimation and interpretation. However, it relies on the rather restrictive assumptions that preferences are homogeneous across individuals and that error terms are independently and identically distributed, both of which may conceal unobserved heterogeneity. We relax both assumptions with various alternative model specifications to test for unobserved heterogeneity. 
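Orme's rule-of-thumb for the minimum sample size reduces to a one-line calculation; the following sketch (the function name is ours) reproduces the lower bound reported in the sample size section above.

```python
import math

def orme_min_n(tasks: int, alternatives: int, max_levels: int) -> int:
    """Orme's lower bound for main-effect estimation: N >= 500 * l / (t * a)."""
    return math.ceil(500 * max_levels / (tasks * alternatives))

# The study's design: 10 tasks, 2 alternatives, at most 3 levels per attribute
print(orme_min_n(tasks=10, alternatives=2, max_levels=3))  # 75
```

Doubling this lower bound, as the study does by requiring at least 150 respondents, is a common safeguard for robust main-effect estimation.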
We consider this important because undetected heterogeneity may affect the estimation results and thus the appropriate weights of the included attributes, which may then be used for the calculation of a composite measure. Especially in the case of undetected group heterogeneity, we would need to adjust the composite measure for appropriate subgroups. We use the random parameter model (RPL), as proposed by McFadden and Train , in order to relax the assumption of taste homogeneity (while maintaining independent and identically distributed errors). This model extends MNL by using a continuously distributed random parameter for each individual. In addition, we relax the assumption of independently and identically distributed errors (while maintaining homogeneous preferences) and assume that idiosyncratic errors are not identical but individually scaled, as suggested by Bhat and Fiebig et al. (S-MNL). Similar to RPL, S-MNL also captures unobserved heterogeneity. However, its advantage over RPL is its more parsimonious specification, which allows for more efficient estimation. Next, we use the general MNL model (G-MNL) introduced by Fiebig et al. , which combines the characteristics of RPL and S-MNL. As pointed out by Fiebig et al. , G-MNL puts more weight on randomness in the tails compared to RPL due to the inclusion of scaled error terms. Therefore, G-MNL captures unobserved heterogeneity better than RPL, which focuses more on the centre of the distribution. Besides assuming a continuous distribution of preference heterogeneity, as in the models above, we also estimate standard latent class models (LC) and latent class models enhanced by random parameters (MM-MNL) . These model types assume a discrete distribution of heterogeneity. As a consequence, unobserved heterogeneity clusters in groups rather than at the individual level, as in RPL or G-MNL. 
In the case of MM-MNL, both types of heterogeneity are combined so that individual unobserved heterogeneity can occur within clustered groups. In order to identify the best-fitting model, the Bayesian information criterion (BIC) and conditional Akaike information criterion (CAIC) are used. We refrain from using the Akaike information criterion (AIC) since it is known to be insufficiently restrictive, especially in multi-class models, favouring too many groups, whereas the BIC and CAIC perform well (see, e.g. ). 
Clearly, the drop-out rate of 96.5% is rather high. However, our main target is to derive weights for a composite measure that are as accurate as possible, not necessarily to draw a representative sample; a representative sample would likely contain observations from users who do not have clear preferences or are unwilling to participate, which we consider the primary source of noisy data. In light of our objectives, we therefore do not regard the drop-out rate as problematic. The following analysis reports only on the 177 respondents who fully completed the DCE part of the survey and provided consistent responses (48.63% completion rate). Table summarises the key characteristics of the sample. The median age of all respondents is 58 years, slightly more than half of all respondents are female (54.80%), and 68.93% stated (technical) university entrance qualification as their highest educational level. In addition, 53.56% of all participants report their health status to be good or better, and 54.80% claim that they suffer from a chronic condition. Finally, 58.76% of respondents state that they have used hospital report cards during the last 12 months, and more than eight out of ten surveyed respondents (84.75%) state that they perceived big differences in the quality of care between hospitals. Please note that all respondents acknowledge differences in hospital quality, which is in line with the studies and reports mentioned in the introduction.
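As an illustrative consistency check (our reading, not part of the original analysis), the response figures quoted above fit together if the 96.5% drop-out rate counts everyone who did not fully complete the DCE, not only those who stopped at the introduction page:

```python
opened = 5042     # users who opened the survey link
stopped = 4678    # stopped directly after the introduction page
completed = 177   # fully completed the DCE with consistent responses

returned = opened - stopped               # 364 returned questionnaires
drop_out = (opened - completed) / opened  # ~0.965, i.e. 96.5%
completion = completed / returned         # ~0.4863, i.e. 48.63%

print(returned, f"{drop_out:.1%}", f"{completion:.2%}")
```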
Descriptive rating and ranking results The results regarding the importance of the five presented quality information items for the hospital choice for THR surgery (on a scale of 1–5, with 1 = not at all important and 5 = extremely important) show that the "Quality of treatment" is rated as the most relevant (4.76 ± 0.56), followed by "Equipment and qualification" (4.73 ± 0.49) and "Number of cases treated" (4.62 ± 0.70) (see Table ).
In contrast, holding an "EndoCert Certificate" (4.15 ± 0.85) and "Recommendation from other patients" (3.96 ± 0.84) seem to be less important. The ranking results for the single most relevant information item reveal "Quality of treatment" (47.46%) and "Number of cases treated" (35.03%) as the most relevant, while "EndoCert Certificate" (9.60%), "Equipment and qualification" (6.21%), and "Recommendation from other patients" (1.69%) are stated less frequently. Model choice For single-class models, we find that the MNL model represents the data best with respect to all consulted information criteria. In addition, relative estimation results vary only marginally across single-class models (cf. Table A.3 of Supplementary Material). The same picture emerges with respect to multi-class models. Regarding the number of classes for the multi-class models, the BIC and CAIC point to using a single class only (cf. Table A.4 of Supplementary Material). Moreover, we find only little variation in the relative estimates even for multi-class models. For both LC and MM-MNL with two groups, for example, one group evaluates the number of cases treated higher and the other prefers recommendations, while all other attributes remain roughly as before (cf. Table A.5 of Supplementary Material). However, using two groups does not improve the data match sufficiently compared to the (single-class) MNL model, such that the BIC and CAIC reject these models.
Expanding to even more classes provides no further insights. This indicates that unobserved heterogeneity, either as idiosyncratic continuous error or in the form of clustered groups, is negligible in our sample. We conclude that additional model assumptions provide no meaningful information. Therefore, we hereinafter restrict our discussion to the MNL model. Findings from the DCE Table summarises the estimation results of the MNL model. Since we employ effect coding, all estimates are with respect to the (hypothetical) mean of the corresponding attribute, implying that the coefficient values within an attribute add up to zero. As shown, all estimates are highly significant. Consequently, all attributes are highly relevant for the hospital choice of the respondents. Overall, consumers prefer hospitals that achieve the quality targets for the treatment quality, as well as for equipment and qualification, and have above-average numbers for the cases treated and patient recommendations, as well as being certified as an EndoProstheticsCentre of Maximum Care (EPCmax). This pattern also emerges from Fig. , which depicts the corresponding level estimates. Figure illustrates the mean relative importance of attributes for hospital choice. Thereby, the range of coefficient values within an attribute reflects the importance of an attribute for the hospital choice. We compute the relative importance of each attribute using each attribute’s coefficient range (i.e. the difference between the coefficients of the highest and lowest level of each attribute), expressed as a share of the total range across attributes. As shown, patients consider the quality of treatment as the most important (26.96%; level range of 1.734), followed by the annual number of cases treated (24.78%; level range of 1.594). 
In contrast, holding an EndoCert Certificate (17.51%; level range of 1.126), reaching quality targets with respect to equipment and qualification (15.83%; level range of 1.018), and recommendations from other patients (14.93%; level range of 0.960) are less important.
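The relative-importance computation described earlier (each attribute's coefficient range expressed as a share of the summed ranges) can be sketched in Python using the level ranges quoted in the text:

```python
level_ranges = {
    "Quality of treatment": 1.734,
    "Number of cases treated": 1.594,
    "EndoCert Certificate": 1.126,
    "Equipment and qualification": 1.018,
    "Recommendation from other patients": 0.960,
}

total_range = sum(level_ranges.values())  # 6.432
relative_importance = {
    attribute: level_range / total_range
    for attribute, level_range in level_ranges.items()
}

for attribute, share in relative_importance.items():
    print(f"{attribute}: {share:.2%}")
# -> Quality of treatment: 26.96% ... Recommendation from other patients: 14.93%
```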
The objective of this study is to elicit patient preferences concerning publicly available hospital quality information for elective hip replacement surgery by conducting an onsite survey on the German HRC Weisse Liste (WL). The findings may support us in developing weighted composite measures from the consumers' perspective. We can draw several conclusions from our results. First, we can see that the DCE-based findings appear broadly consistent with the results of the rating- and ranking-based findings. This applies to both the order and relative importance of all quality information items. As shown, "Quality of treatment" is the most important information in all three analyses. For example, the DCE shows a relative importance of 26.95%, whereas 47.46% of the respondents ranked it in first place; it was also rated highest in terms of the mean importance, with a score of 4.76 (on a 1–5 scale with 1 = not at all important and 5 = extremely important). In addition, "Recommendations from other patients" appears to be the least important information in all three approaches. Here, the DCE shows a relative importance of 14.94%, whereas 1.69% of the respondents ranked it in first place; it was also rated lowest in terms of the mean importance, with a score of 3.96. We find slightly different results for the remaining three attributes, which, however, are not contradictory overall. For example, "Number of cases treated" is placed second with respect to the relative importance of all five attributes.
Here, we compute a relative weight of 24.78% based on the DCE, and 35.03% of the respondents ranked it in first place as the single most important information item; however, the mean importance is rated 4.62 and thus is slightly lower than the mean importance of "Equipment and qualification". Nevertheless, considering the standard deviations and the small differences of the means, the higher mean of "Equipment and qualification" from the rating-based approach may not be meaningful compared with the results from the ranking-based approach and the DCE-based findings. Second, the attributes for hospital choice for THR employed in this study – or joint replacement in general – have been utilised in other DCE studies, and thus appear to be of significant importance in general. For example, several studies include "Quality of treatment". The "Number of cases treated" is also identified as a factor influencing hospital choice. In other studies, "Equipment and qualification", "EndoCert Certificate", and "Recommendations from other patients" are applied in a hypothetical hospital choice scenario. Consequently, our study may incorporate the most important attributes for hospital choice, despite the absence of qualitative research to identify and select the most pertinent attributes for patient preferences in general. However, we select our attributes in accordance with the publicly reported quality information provided on WL (see above). Furthermore, we find evidence for the importance of the attributes of this study from more general approaches. Quality improvements of hospitals are associated with high case volume, certification, or better equipment and experience of medical staff. Additionally, all but "Recommendation from other patients" are among the top ten relevant information items for hospital choice among THR patients in Germany.
However, we would like to emphasise that all the studies mentioned above include a variety of attributes that may not be present in our setting or may have been used differently. For example, “Quality of treatment” might also contain adverse medical outcomes such as complication rates or readmission rates. In addition to this, “Equipment and qualification” might also cover the conduct of medical staff. Third, despite the differences in the general setting and combination of attributes, which do not allow for the direct comparison of findings, we may compare the relative importance of our attributes with findings from other research. In general, our results seem to be similar to those from other studies. “Quality of treatment” shows high importance in other studies . For example, Emmert et al. compare the preferences of both patients and referring physicians regarding hospital quality information for THR surgery. They demonstrate that traditional quality measures are rated most important (e.g. postoperative complication rates, one-year revision surgery). In the case of “Number of cases treated”, we find more heterogeneous results. While Emmert et al. and Kuklinski et al. report “Number of cases treated” to be a highly important attribute, Groenewoud et al. and Damman et al. determine a medium relevance yet still find it to be more important than patient recommendations or the qualifications of staff, respectively. With respect to the remaining attributes, our results demonstrate their lower importance for hospital choice. For example, “Equipment and qualification” , “EndoCert Certificate” , and “Recommendation from other patients” appear to be less important. Moreover, Kuklinski et al. find for knee replacement that patient recommendation is more important than certification with case volume being the most important quality indicator. Finally, it should be mentioned that our study is similar to the research of Emmert et al. 
, who analyse the preferences of hip replacement patients after surgery. Given that we recruited participants for the survey during their hospital search on WL, we expect that our results are more likely to reflect the preference picture before THR treatment. Both studies demonstrate that patients evaluate “Number of cases treated” as relatively important and holding an “EndoCert Certificate” as less relevant. Fourth, we find that the order of levels is consistent and as expected for all attributes. However, we find that the preference for EPCmax certificates is only 10.9% higher than that for EndoprostheticsCentre certificates (EPC). Similarly, Emmert et al. report relatively small differences between preferences for EPCmax and EPC certificates among both patients and physicians. We suspect that this may be due to patients not being aware of the differences between the two certificates. The certificates are issued by EndoCert Limited, which is a subsidiary of the German Society for Orthopaedics and Orthopaedic Surgery . The certification process requires the fulfilment of comprehensive quality objectives regarding structure, processes, and outcomes. Depending on which criteria are met, a hospital will receive one or the other certificate. For example, an EPC must treat at least 100 patients per year, whereas an EPCmax must treat at least 200 patients per year . Alternatively, the added value of higher certificate types may be of limited relevance. This would imply that patients may know the differences but may not care about the type of certificate a hospital holds as long as it provides one. However, at this point we can only speculate about the reasons for this finding. Since our findings are consistent with Emmert et al. , we recommend further research to explore the reason for this result in more detail. Fifth, we cannot detect various respondent groups with similar preferences. 
All considered multi-class models are inferior to the MNL model as the best model with respect to BIC and CAIC. The differences seem minor or implausible and do not make a case for dividing into sub-groups (cf. Tables A.4 and A.5 of Supplementary Material). Finally, as suggested by previous research, the results of our study on patient preferences can be used to develop weighted composite measures from the consumer perspective. Such a composite measure would aggregate existing information into one summary score, thereby improving the usability of a health report card like WL by reducing the complexity of the information provided. In this context, Schlesinger et al. and Emmert et al. show that reducing the complexity of report cards increases the quality of users' hospital choices. Based on our findings, it may be possible to calculate the weights for a patient-centred composite measure that is publicly reported on WL. The next step would be to convert real hospital quality results into hospital-related composite measures. The preference for each hospital can then be estimated based on the sum of part-worth utilities for the selected level of all attributes. For example, a hospital that achieves quality targets regarding the treatment quality (coef. 0.985) and equipment and qualification (coef. 0.605), with above-average recommendations from other patients (coef. 0.384), above-average case numbers (coef. 0.718), and certification as an Endoprosthetics Centre of Maximum Care (coef. 0.388), would have an overall score of 3.080. Comparing this value for each hospital against the overall score for all hospitals, we can group hospitals into several performance groups. In this context, it seems highly relevant that our different models do not detect unobserved preference heterogeneity in our sample. This implies that it would be sufficient to compute one single weighted composite measure that corresponds to the preferences of all users.
Otherwise, it would be appropriate to take into account specific individual or group characteristics. Our findings should be considered in light of some limitations. First, since we focus on German data from a German population subgroup (i.e. users of the German hospital report card WL), our findings and conclusions might be of limited importance for other countries. However, we think that our methodological approach and results may be of interest for countries with public reporting initiatives (e.g. the United States or United Kingdom). Second, the preferences of respondents derived from hypothetical hospital scenarios might differ from the actual search and selection behaviour in real life when people are confronted with similar decisions . Third, the choice of attributes is based on a pre-existing selection of variables provided on WL during the time of the experiment and not on qualitative pre-study research as recommended (e.g. ). Therefore, it is important to mention that other potentially relevant attributes are not considered in our study. We cannot exclude the possibility that integrating other information may lead to different findings. However, the selection of quality information on WL is based on publicly available quality measures from the patient’s perspective; a similar approach was published recently by other authors . Moreover, the chosen attributes are similar to those utilised by Emmert et al. , who derive the attributes by means of comprehensive literature research and qualitative research. In both studies, “Number of cases treated” and “EndoCert Certificate” appear, while our “Quality of treatment” and “Equipment and qualification” comprise their “Postoperative complication rate”, “Confirmed diagnosis rate”, “Mobility at hospital discharge”, and “Prevention of fall measures”. Only our attribute “Recommendation from other patients” is entirely novel. 
Fourth, for "Quality of treatment" and "Equipment and qualification" we model the level types as quality targets reached/not reached. This may lead to overestimating the relative importance of the two attributes, since respondents do not need to weigh relative differences, as they would, e.g., with level types modelled as the mean share of quality targets reached and relative deviations above and below. Yet, even if we overestimate these two, our results for relative importance are very likely to remain qualitatively robust, with "Quality of treatment" and "Number of cases treated" being notably more important than the other attributes. Fifth, our results refer to patients for elective hip replacement, i.e. to treatments that focus on improving the quality of life. As shown by Kuklinski et al., patients value quality information differently depending on whether the treatment is life-saving or improves quality of life. Hence, our results may not be transferable to other indications, such as cancer or stroke, where the primary goal of treatment is saving life. Furthermore, they may only add to a weighted composite measure for elective THR or similar indications where treatment is meant to improve the quality of life. Sixth, we need to emphasise that the external validity of our study is limited. Compared to patients of elective THR in 2023, our sample contains slightly too many respondents below 50 years of age, slightly too few above 70 years of age, and slightly too many males. In comparison to the general population of Germany, our sample shows higher educational attainment. These differences may be explained partially by the characteristics of regular web users in Germany, who are younger, more often male, and better educated, as well as by the intertemporal correlation between year of birth and educational attainment. Compared to WL users in general, our sample is slightly older, less female, and better educated.
Taking into account that the major activities on WL besides THR are total knee replacement, where the patient structure is similar to THR, and breast cancer, where patients are more often female and younger, differences regarding age and sex may narrow down with respect to THR patients on WL. However, our main target is to derive weights for a composite measure. This implies a trade-off between including a representative sample versus only those who see a benefit in participating, i.e., those who are aware of their preferences and eagerly return to use the composite measure. Therefore, the remaining deviations in representativeness seem tolerable. Finally, we cannot entirely exclude sample biases. However, we tried to mitigate the potential risk of biased data by collecting a sufficiently representative sample of adequate size, by providing anonymity for respondents to increase honesty and response quality, as well as by asking clear and neutral questions for easy understanding. This study provides new insights into the preference patterns of HRC users for hospital characteristics for patients undergoing elective THR surgery in Germany. Our results show that patients consider "Quality of treatment" and "Number of cases treated" as highly important. In contrast, "EndoCert Certificate", "Equipment and qualification", and "Recommendations from other patients" seem to be less important. We do not detect any meaningful heterogeneity in the preferences of all respondents. Based on our findings, the computation of a weighted composite measure may be the next step. Supplementary Material 1.
Training in cytopathology in times of social distancing: a comparison of remote vs. traditional learning | 9f9332c1-fa72-48ae-b2cc-eece242286c4 | 8414736 | Pathology[mh] | Residency and fellowship training in cytopathology unexpectedly became challenging during this past year due to the novel coronavirus 2019 (COVID-19) pandemic. Not much was known at the onset but soon the genome of coronavirus SARS-CoV-2 was sequenced and public health measures such as “social distancing”, mask wearing, and hand washing were implemented to curb the transmission of disease. Social distancing was defined as a distance of at least 6 feet be maintained between any 2 individuals. This prevented gatherings and led to closures of many non-essential services and institutions. Hospitals and health care facilities maintained essential health care work and reduced elective procedures. These measures led to a marked decline in the volume of specimens received by anatomic pathology/cytopathology laboratories and decreased the number of fine-needle aspiration (FNA) procedures performed. The trainees in pathology—both residents and fellows—were required to stay at home either entirely or partly during the early period of the pandemic under the recommendation of the department of academic affairs. As a result, programs had to redesign their cytopathology fellowship and residency training programs, complying with local directives and regulations while maintaining high-quality education without risking trainee health. Herein we describe our department’s remote cytopathology training program developed in response to the COVID-19 pandemic. 
The issues identified due to the “stay at home” order were screening slides, case sign-out with faculty, 1-on-1 microscopic teaching sessions with cytotechnologists, performing FNAs, attending rapid onsite evaluation (ROSE), reviewing radiology images with radiologists, participating in multi-headed microscopic consensus conferences, sharing cases among peers, taking entry and exit slide tests, learning cytology preparation techniques in the wet laboratory, and any activity that was attended by more than 10 people congregating in a room, such as quality- and management-related meetings or didactic lectures in conference rooms. Self-study by trainees played a major role in developing diagnostic skills and medical knowledge during remote training. To aid this, digital study sets were built utilizing the Aperio AT2 (Buffalo Grove, IL) whole slide scanner. Access was given to rotating residents and the cytology fellow using the Aperio E-slide manager to review whole slides. Trainees developed digital screening skills for gynecologic (GYN) and non-gynecologic (Non-GYN) slides. Sets of GYN and Non-GYN slides were scanned in and utilized as entry and exit tests. Online resources were provided covering a wide variety of topics in cytopathology. Our trainees availed themselves of the free webinars and lecture series that The American Society of Cytopathology made available during the pandemic. These lectures, which included question-and-answer sessions, were immensely helpful and of high educational value. The local cytology continuing medical education lecture series was conducted via Microsoft Teams (Microsoft, Redmond, WA), which participants joined remotely. Remote video conferencing via Microsoft Teams and desktop sharing helped in maintaining day-to-day interactions between trainees, cytotechnologists, and faculty.
The rotation began with a remote video entry interview with the Senior Director of Cytopathology to discuss goals, expectations, structure of the changed program, ideas for projects, and the trainee’s end of rotation presentation. The trainee took the entry GYN and Non-GYN slide tests remotely, wherein they screened ten GYN and ten Non-GYN digital slides virtually via the E-slide manager. These were graded and reviewed via Microsoft Teams video conferencing and desktop sharing of live microscopic slide images with the Specialist Technologist of Education and trainee. Introductory microscopic slide sessions were given to the trainees via Teams and telecytology to review GYN and Non-GYN criteria for cytologic interpretations by the Specialist Technologist of Education. These microscopic sessions were continued by a team of cytotechnologists throughout the training on a case-by-case and on-demand basis. The trainee's responsibilities included contacting assigned faculty to discuss tasks for the day. This involved signing out cases via telecytology, as well as discussion of the structured question of the day. A microscopic slide session of interesting cases was given daily by the Cytology Fellowship Program Director with trainees participating remotely. The daily cytopathology consensus conference could not be held at the multi-headed microscope. It was maintained via Teams and telecytology. Trainees joined in the discussions remotely to learn from these interesting and difficult cases. They learned the utility of ancillary studies, radiologic correlation and findings, the importance of history and clinical impression, and developed interpersonal communication skills. The trainees were assigned 3 online “mock” board exams that were previously developed to help prepare for board examinations.
We utilized older American Society for Clinical Pathology (ASCP) GYN and Non-GYN Digital Image Programs by converting the CD-ROM images of unknown cases and histories into a cloud format with worksheets to correspond to the photos. These were submitted by the residents weekly. To overcome the lack of hands-on training and performance of FNAs, didactic lectures were given by the Director of the FNA clinic on the basics of the ultrasound-guided FNA performance and interpretation of ultrasound images. Web sites to access videos of FNA techniques and simulations of other cytology procedures were provided . A simulated ROSE FNA experience was created through video conferencing and desktop sharing of live microscopic slides. Cytotechnologists performed real-time screening of known DiffQuik-stained FNA cytology slides that were projected to trainees. Trainees performed an evaluation of the case with determination of adequacy, triaging the specimen, and other pertinent questions particular to the case as if they were attending a ROSE telecytology procedure. Activities of the Cytology Preparatory Laboratory were covered via virtual tours, lectures, video conferencing, and telephone meetings with the laboratory supervisors and managers. Telephone sessions and video chats were also held by the Cytology Management Team to discuss quality metrics, quality assurance, quality improvement, laboratory regulations, and lab management. At the end of the rotation, the residents presented a half-hour lecture on a predetermined topic of interest to the Cytopathology faculty and cytotechnologists via Teams. They took an exit exam like the entry exam, which was then reviewed with them via Teams and telecytology. Grades for entry test, exit test, and the ASCP GYN and Non-GYN digital image workbooks were noted and submitted to the Senior Director for an exit interview. 
The trainees filled out a survey at the end of their rotation to give feedback on their experience with remote learning in comparison to in-person learning. Overall, our experience was similar to that reported by others. Eight trainees (4 postgraduate year [PGY]-4, 3 PGY-3, and 1 PGY-2 residents) participated in evaluating the remote learning program. The GYN and Non-GYN exit exam results for the virtual slide vs. real slide for the most promising of the PGY-4 residents were 75% and 70% vs. 77.5% and 85%, showing a slight decrease in scores. The online assignments showed the performance of the same resident to remain at the same level. In tallying the surveys, we found that many trainees felt that the amount of work received was comparable to that of pre-remote learning, that they learned about the same to more than with previous in-person learning, and that they would like to continue to receive their work virtually. They found the workload easier to manage when given both options, that is, remote work and in-person learning. Advantages of the remote learning experience were that it allowed trainees to have more control of their learning experience, improved time management, and allowed more time for studying and research. Trainees could concentrate on projects and academic activities as they did not have their daily commute or non-academic activities. One of the major limitations experienced throughout the remote learning process was screening whole-slide images. Scanning cytology slides that require multilayer focusing led to prolonged screening. As case volumes had markedly diminished at the beginning of the pandemic, trainees became heavily dependent on online resources. Internet access was limited due to carrier coverage or access limitations placed on personal computers by the hospital. Communication issues via email and online conferencing posed another hurdle when Teams was first introduced.
It took some time for users to become familiar with the system, set up meetings, chat, and participate in remote conferencing. Lack of physical screening, no access to the physical patient, and lack of experience with radiologists were disadvantages to the trainees, hindering their ability to appreciate and understand cell morphology and radiologic images. It was difficult for trainees to receive immediate feedback or mark digital slides for clarifications and questions that often come up while screening slides. Lack of face-to-face communication poses a barrier to building interpersonal skills, such as reading body language and facial expressions. We found that remote learning allowed our institution to continue teaching trainees without severely compromising the education of the trainees when compared to traditional learning, while allowing for appropriate social distancing and the ability to adhere to public health mandates, at the same time forcing faculty and trainees to embrace the virtual space and online educational content. Virtual learning allowed trainees to learn at their own pace, focus on areas of weakness, and increase academic productivity by working on projects and reading. Our residents and fellow were able to develop and improve skills in screening digital slides, evaluating images for determination of adequacy via telecytology, and reviewing online images. Screening whole digitalized cytology slides remains challenging. Some institutions that are converting to an all-digital workflow mention digitalizing cytology cell block slides rather than smear slides. Tumor boards continue to be held remotely, with our trainees sharing live slides or projecting scanned digital images of cases that are discussed, thus saving time in not having to prepare elaborate presentations. 
Scanning slides led to the creation of an online digital slide library to be utilized not only for study purposes by trainees but also as a resource for cytotechnologists and faculty. Consensus conferences are held both in person and virtually, allowing offsite residents and faculty to participate. Inter- and intradepartmental meetings continue to be virtual with increased participation at all levels. Resident end of rotation presentations are held in the virtual platform. Residents not rotating in cytopathology are able to participate without leaving their rotations. As a result of the feedback from our evaluations and our experiences, we have partially instituted remote learning into our academic curricula and developed a hybrid program. Although remote learning cannot replace physical learning, it can be used in conjunction with physical learning, opening more doors for expanded learning techniques and helping keep us prepared for the next unprecedented challenge. Through this experience, we have developed patience, understanding, compassion, and empathy for each other, which we intend to continue to put into practice after the pandemic ends.
Prevalence and Risk Factors for Malignant Nodal Involvement in Early Esophago-Gastric Adenocarcinoma

Study Design and Setting

CONGRESS (endosCopic resectiON, esophaGectomy or gastrectomy foR Early esophagogastric cancerS) was conducted as a multicenter retrospective cohort study with a structure modeled on, and methodology developed in partnership with members of, previous international research collaboratives. A database capturing diagnostic, demographic, treatment, and outcome variables was designed and piloted by a multidisciplinary steering group that included surgeons, oncologists, gastroenterologists, and methodologists. This database was transcribed to an online platform for anonymised data submission (Research Electronic Data Capture, REDCap). Open invitations to participate were circulated via specialist societies, social media, and personal communication with a predominant focus on UK centers; EG cancer management in England and Wales is centralized to 35 tertiary hospitals. In addition to UK centers, 1 Swedish center (Karolinska Institutet) also took part. Each center’s local lead was responsible for ensuring their own relevant ethical permissions and study registration to comply with local protocols.

Inclusion Criteria

Patients diagnosed 2015 to 2022 (inclusive) were eligible for inclusion, with the follow-up period extending until the database closure date in July 2023. The aim was to capture outcomes for all patients who received treatment with curative intent for T1N0 cancer, based on available staging.
To capture real-world outcomes for T1N0 disease, the allowed diagnostic criteria were pragmatic and included any patients with cT1N0 undergoing curative therapy, as well as patients undergoing surgery or ER for high-grade dysplasia (HGD) with subsequent pT1N0—as these would also be subject to the same decision-making regarding subsequent surveillance or surgery. Only patients with a histological diagnosis of adenocarcinoma, or initially columnar dysplasia, were included in this analysis. Patients who received palliative or no treatment for any reason were excluded.

Data Capture

Demographic data included age, gender, and comorbidities. Initial diagnostic data and up to 3 treatment rounds were captured to account for patients who may have had initial ER followed by additional treatment (including repeat ER, surgery, or oncological therapy). Clinical and survival outcomes were recorded.

Statistical Analysis

Patients who underwent surgery were compared with those who did not using appropriate nonparametric statistical tests (χ2 and Kruskal-Wallis). Predictive factors for LNM were compared with final surgical specimen pathology. To assess differences between LNM risk for ER and surgical specimen-based staging within the same patient group, we compared histopathological findings, along with corresponding LNM risk, for the patients who underwent ER followed by surgery. Where the surgical specimen contained no residual primary tumor, endoscopic pathological results were used. Multivariable regression was performed to assess the feasibility of a predictive model for LNM risk following ER based on demographic, clinical, and pathological variables. Factors affecting overall survival were assessed using multivariable Cox regression analysis. Missing data were handled by multiple imputation by chained equations. P <0.05 was considered statistically significant.
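As a rough illustration of the nonparametric group comparisons described above, the Pearson χ2 statistic for a 2×2 contingency table can be computed by hand. The function and the counts below are a hypothetical sketch, not taken from the study data or its analysis code.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:
                   factor present   factor absent
        group 1          a                b
        group 2          c                d
    """
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected cell count = (row total * column total) / grand total
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: a feature present in 30/100 surgical
# vs 10/100 non-surgical patients.
stat = chi_square_2x2(30, 70, 10, 90)
print(round(stat, 2))  # 12.5
```

In practice such a statistic would be compared against the χ2 distribution with 1 degree of freedom (e.g., via a statistics library) to obtain the P value.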
STROBE guidelines were adhered to in reporting of results (Supplemental Data Appendix 1, Supplemental Digital Content 1, http://links.lww.com/SLA/F289 ). A total of 1841 patients from 26 centers were included. Median follow-up was 32 months (IQR 14–53). Further analysis was confined to patients with confirmed adenocarcinoma or columnar type high-grade dysplasia (HGD), giving a cohort size of 1601 (1197 adenocarcinoma, 404 HGD).
Data collection was good, with low rates of data missingness: <1% in initial staging and surgical outcome data. Missingness of endoscopic resection pathology data was <1% for tumor depth, differentiation, and presence of signet cells, and 11% for LVI. Initial clinical staging (Table ) for these patients was T1 in 978 (61.1%), TX for 348 (21.8%), and T0 or HGD in 274 (17.1%). Initial staging investigations performed were variable and more common in patients with confirmed adenocarcinoma, and included CT scan (73%), positron emission tomography (30.5%), endoscopic ultrasound (31.5%), and staging laparoscopy (6.5%). Initial management of these patients was predominantly endoscopic resection (1285/1601, 80.3%), of whom 217/1285 (16.9%) went on to have surgery based on high-risk features or patient preference (Fig. ). A total of 271/1601 (16.9%) of patients were primarily managed with surgery. Where a reason was given, most patients who went straight to surgery were either deemed not endoscopically resectable (170/270 valid responses, 62.9%), or in a small number of cases underwent primary surgery due to patient choice (22/270, 8.1%). Ultimately, 497 patients (31.0% of all patients) with clinically early disease at presentation underwent radical surgery.

Patient Demographics

Median patient age was 71 years. Patients were predominantly male, with distal esophageal tumors (Table ). When comparing patients undergoing surgery versus those who did not, surgical patients were more likely to be younger (68 vs 72 y, P <0.001), to have a Charlson comorbidity score of 0 (59.2% vs 46.5%, P <0.001), and to have more advanced tumors, with a greater proportion of surgical patients demonstrating poorly differentiated tumor cells (23.3% vs 8.5%, P <0.001), present signet cells (8.9% vs 2.1%, P <0.001), or less favorable cT stage (T1b 22.3% vs 9.1%, P <0.001).

Procedural Outcomes

Following endoscopic resection, no complications were reported in 1198/1285 (93.2%) of cases.
The most common reported complications included bleeding in 42/1285 (3.3%) and perforation in 11/1285 (0.9%) of cases. Considering patient outcomes after radical surgery, where outcome data were available, complications occurred in 284/453 patients (62.7%), which were of Clavien-Dindo grade 1 or 2 in 168 patients (37.0%), grade 3a in 41 (9.1%), 3b in 28 (6.1%), and 4 in 38 (8.4%). In-hospital mortality was 2.0% (9 patients). The median length of stay was 10 days (IQR 7–17.75), and a median of 22 lymph nodes were harvested in each case (IQR 15–31).

Predictors of Lymph Node Metastasis

The overall rate of surgical specimen LNM was 67/497 (13.5%). As expected, more advanced nodal stage corresponded to worse survival ( P =0.006, Supplemental Data Appendix 2, Supplemental Digital Content 1, http://links.lww.com/SLA/F289 ). Assessing histopathology of all surgical specimens, where recorded, LVI was present in 85/473 cases (17.9%). Tumor cell differentiation was poor in 107/459 (23.3%). Signet ring cells were present in 47/494 (9.5%). When comparing ER-based and subsequent surgical pathological staging variables for the subset of patients who underwent surgery after ER, there was significant discordance between endoscopic and surgical staging. Of the patients who underwent ER followed by surgery, 110 patients had R0 resection with complete pathological data, of which 40 (36%) had subsequent surgical specimen pathology exhibiting discordant T stage or LVI status (Fig. ). The overall rate of LNM for this group was 13.8% (30/217). The rate of LNM across varying T stages was T1a=9.8%, T1b sm1=14.8%, T1b sm2-3=17.9%. Significant rates of LNM were seen in patients without any reported ER pathological risk factors (ie, patients without any of the following risk factors: positive deep or circumferential ER margins, poor differentiation, or T1b sm2 or greater depth of invasion), 8/52 (15.4%).
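To put the reported proportions in context, binomial confidence intervals can be attached to the raw counts quoted above (30/217 LNM overall after ER followed by surgery; 8/52 in patients with no ER risk factors). The sketch below uses the Wilson score interval and only the standard library; it is an illustrative calculation, not part of the study's own analysis.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Counts reported in the text
for k, n in [(30, 217), (8, 52)]:
    lo, hi = wilson_ci(k, n)
    print(f"{k}/{n} = {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The widths of these intervals illustrate why subgroup LNM estimates from modest denominators (such as 8/52) should be interpreted cautiously.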
Comparing rates of N+ between ER and surgical specimens (Table ), analyses of surgical specimens returned less advanced tumors (lesser T stage, lower incidence of poorly differentiated tumors or LVI) ( P <0.001 for all comparisons), but higher rates of LNM, suggesting overall potential understaging in surgical specimens. Cross tabulated by ER-derived LVI, differentiation, and T stage, the number of patients for each subgroup was low; surgical specimens positive for LNM were, however, seen across almost all groups (Fig. ). Multivariable regression analysis to derive a prognostic model for LNM based on histological and demographic variables resulted in a statistically nonsignificant model with poor calibration (Supplemental Data Appendix 3, Supplemental Digital Content 1, http://links.lww.com/SLA/F289 ).

Cox Regression Analysis for Overall Survival

For all patients, after adjusting for age, sex, Charlson comorbidity score, tumor site, histological subtype, differentiation, Barrett, presence of signet cells, and cT stage before treatment (Supplemental Data Appendix 4, Supplemental Digital Content 1, http://links.lww.com/SLA/F289 ), age (HR 1.07, 95% CI: 1.05–1.09, P <0.001) and Charlson score 0 (0.57 (0.44–0.74), P <0.001) were significantly associated with survival. In terms of treatment variables, surgery was not significantly associated with survival advantage for unselected patients (HR 0.72, 95% CI: 0.50–1.03, P =0.070). Analyzing only patients who underwent ER, after adjusting for demographic and disease variables (Table ), a significant survival benefit was seen for patients undergoing surgery [HR 0.33 (0.15–0.77), P =0.010], with poorer survival for older patients [1.08 (1.05–1.11), P <0.001] and positive ER circumferential margin [2.51 (1.47–4.29), P =0.001].
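The hazard ratios quoted above come from Cox models, where HR = exp(beta) and the 95% CI is exp(beta ± 1.96·SE) on the log scale. The sketch below shows this back-transformation; the coefficient and standard error are illustrative assumptions chosen to roughly reproduce the reported HR of 0.33 (0.15–0.77) for surgery after ER, not the fitted model outputs.

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """Back-transform a Cox regression coefficient (log hazard ratio)
    and its standard error into an HR with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical values (illustrative only): beta = -1.11, SE = 0.42
hr, lo, hi = hazard_ratio(-1.11, 0.42)
print(f"HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # HR 0.33 (95% CI 0.14-0.75)
```

Because the interval is symmetric on the log scale, the published bounds (0.15–0.77) are asymmetric around the point estimate on the natural scale, as expected.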
CONGRESS represents the largest known granular data set (containing detailed demographic, disease, and outcome data) for early EG cancer to date, presenting contemporary management strategies and real-world outcomes for 26 centers. The overall rate of LNM of 13.5% is higher than has been reported in previous series, with one pooled analysis suggesting rates of 4% for T1a and 23% for T1b disease. Previously reported histological risk factors such as LVI, T stage, and cell differentiation grade did not exhibit clear association when comparing ER staging to LNM risk or overall survival. In multivariable Cox regression analysis, surgery was associated with a strong survival benefit after primary ER. In modern clinical practice, the need to decide between either organ-preserving or surgical therapy is predominantly informed by ER-based pathological staging. Predictive models for LNM should therefore be based upon ER specimen pathology, rather than surgical pathology, if they are to be valid. Published prediction models for LNM in T1b cancer exemplify the limitations of existing reports in that such studies are often based on surgical specimens (rather than ER), low numbers, and outdated practice: Lee et al reported a risk prediction system for LNM in T1 disease based upon 258 surgical specimens from 5 institutions over 11 years (2000–2011), whereas Gotink et al included 248 patients treated predominantly with primary surgery between 1989 and 2016.
These data should therefore not be used to predict LNM based on ER. ER specimens are assessed differently (with typically smaller slices prepared for analysis and therefore closer scrutiny); surgical specimens after ER may also contain multifocal or residual disease with different final staging. The known variability in the assessment of surgical specimens is thought to further contribute to potential understaging of disease if considering surgical specimens alone. In the present data set, despite large patient numbers, we did not find a significant association between pathological variables and LNM risk. This is a substantial finding which calls into question the validity of preoperative counseling of patients based on initial histopathological data. It also calls into question the validity of previous risk assessment studies, many of which were based upon surgical pathology rather than ER. It may be that the increasingly recognized heterogeneous nature of EG cancer, and the interaction of multiple risk factors, means that it is difficult or impossible to accurately predict this from endoscopic specimens. It is equally possible, however, that discordance in this real-world multicenter data set instead reflects the known variability in staging workup and discordant reporting between pathologists. The variability of staging investigations also suggests that improvements in the standardization of workup of these patients may be required, which could improve the pretreatment detection of nodal metastases, though some reports have highlighted that CT and positron emission tomography-CT may be of limited sensitivity or utility in HGD or early cancer. Differences in specimen preparation and pathologist variation may mean that relevant risk factors may be more or less likely to be identified on pathological assessment.
Current UK guidelines recommend joint assessment of specimens by 2 pathologists, one of whom should be a gastrointestinal specialist, for Barrett’s dysplasia only; data on the number and specialist interest of pathologists involved in the CONGRESS specimens were not collected. The number of T1a tumors which went forward to surgery also reflects the importance of other nonpathological considerations which might influence the need for surgery, such as endoscopic appearances or multifocal disease, which were not recorded in this data set. Other factors such as variable time intervals (and potentially resulting tumor progression) between ER and surgery may further complicate pathology-based risk assessment. These issues highlight the need for detailed and prospective study to better understand factors associated with LNM risk. Without a standardized, reliable assessment of specimens, it may not be possible to adequately assess individual patient prognosis based on ER alone. This inability to identify a significant pattern of multifactorial adjusted risk factors in the CONGRESS data set, both for LNM in regression analysis and for overall survival in Cox regression, further complicates the existing dilemma of how to counsel patients with clinically staged early EG cancer. First, over 10% of all patients with tumors that were endoscopically staged T1b sm1 or less, traditionally thought to be the lowest risk group, were found to have LNM at surgery, without any significant association with LVI or tumor differentiation status. Should all patients with early EG cancer, regardless of stage, be offered surgical consultation? Second, what is an acceptable risk for LNM? The American Gastroenterological Association alludes to a “minimal chance” of LN or distant metastasis as representing <2% and that a perceived risk above this should be considered for surgery.
Justification of such a threshold is sometimes given as surgery being a viable option if the risk of LNM outweighs the risk of mortality after esophagectomy. However, this discounts the significant consequences of undergoing surgery, such as an at least temporarily decreased quality of life, and does not consider the fact that patients may accept differing degrees of risk. Some patients may desire maximal risk reduction, or wish to avoid the recurrent interventions and potential anxiety associated with surveillance, and be more likely to request surgery. In contrast, other patients may wish instead to maximize their quality of life with organ-preserving treatment (ER), and may accept a potentially reduced life expectancy in return for avoiding surgery even in higher-risk tumors. There also remains a question about the potential efficacy of adjuvant therapies such as chemotherapy, radiotherapy, or brachytherapy, which could further contribute to disease control after ER in early EG cancer. The findings of this study, that surgery is significantly associated with improved overall survival in early EG cancer after ER, are at odds with some other published findings. Tankel et al reported equivalent survival for surveillance and esophagectomy after ER of high-risk T1b esophageal cancer; however, this was in a small patient group, with only 27 patients in the observation group over a study period of 11 years (2012-2022). Kamarajah et al reported a large US (National Cancer Data Base, NCDB) analysis of patients with T1a and T1b esophageal cancer; ER had equivalent long-term survival compared with primary surgery after propensity score matching for demographic and disease variables. However, that database does not account for differences between ER and surgical staging. Surgical specimens were therefore potentially understaged compared to ER, and in fact represented more advanced disease (and thus benefitted from esophagectomy).
An analysis by the same group of gastric cancer data found that ER was inferior to surgery for gastric cancers. Data for CONGRESS were entered through a collaborative group model based on previously successful studies with many of the same personnel; levels of data missingness were low, and surgical outcome data entered into the CONGRESS database closely mirror those reported in the compulsory UK national EG cancer database (NOGCA), strongly supporting the internal validity of this data set. In multivariable Cox regression analysis, CONGRESS data have suggested that surgery after ER is associated with improved overall survival; however, significant differences between groups and an absence of cause-of-death or recurrence data mean that some of this difference may result from incomplete adjustment within the regression model, highlighting the need for prospective study. Furthermore, the lack of data on disease-related mortality or recurrence type (local, nodal, or systemic) limits some of the conclusions that can be drawn. The relatively high incidence of LNM even in ER-staged low-risk groups is surprising, and suggests the possible presence of additional risk factors not captured here or addressed in current guidelines. Despite the large number of included patients, cross-tabulated analysis of histological subgroups also resulted in small numbers in each group when assessing LNM risk, which suggests a sample size limitation for statistical analysis. The real-world and contemporaneous nature of these data, however, strongly supports the generalizability of the reported findings to daily clinical decision-making in current practice. Novel approaches such as sentinel lymph node biopsy may in future offer an alternative to radical surgery, but are not yet proven. Patients with early EG cancer, along with their clinicians, face a dilemma when it comes to deciding on the optimal treatment modality.
Based upon this large, predominantly UK-based data set, the risk of LNM appears greater, and less predictable in current practice, than previously reported. Many of these findings are discordant with currently accepted evidence, suggesting an urgent need for re-evaluation of staging, treatment, and quality control processes. These data should be used to inform joint decision-making, and highlight the need for urgent prospective study.

Collaborators: Tarig Abdelrahman, Khalid Akbari, Leo Alexandre, Hasan Ali, Bilal Alkhafaff, Anuradaha Alwis, Antonios Athanasiou, Evan Best, Khalid Bhatti, Nick Bird, Alex Boddy, Matt Bonomaully, Amir Botros, Leo Brown, Benjamin Byrne, Richard Byrom, Beatriz Carrasco Aguilera, David Chan, Carissa Choh, Hollie Clements, Peter Coe, Lauren Crocker, Andrea Cross, Vinutha DayaShetty, Niell Dempster, Alexander Dermanis, Massimiliano Di Pietro, Simon Dwerryhouse, Ahmed Elshaer, Nada Elzahed, Sarah Epton, Matthew Forshaw, Nana Gael, Lewis Gall, Ismael Ghazzi, Leeying Giet, Hasan Haboubi, George Hanna, Paul Healy, Jonathan Hoare, Sung Hong, Faisal Ibrahim, Anchal Jain, Chenchen Ji, Courtney Johnson, Sharib Khan, Frederik Klevebro, Bhaskar Kumar, Jie Li, Steven Lindley, Anantha Madhavan, Ash Mahendran, Henrik Maltzmann, Michel Martin, Sotiris Mastoridis, Euan McLaughlin, David Mitton, Krishna Moorthy, Magnus Nilsson, Robert O'Neill, Mervyn Owusu-Ayim, Sally Pan, Simon Parsons, Pradeep Patel, Ian Penman, Abeera Pervez, Chris Peters, Shaun Preston, Oliver Priest, Tom Ritchie, Ioannis Sarantitis, Negar Sharafi, Katie Siggens, Aayush Sinha, Richard Skipworth, Naim Slim, Maria Soupashi, Sophie Stephens, Jennifer Straatman, Jav Sultan, Cheuk-Bong Tang, Nav Thavanesan, Mie Thu, Paul Turner, Bhamini Vadhwana, Ravi Vohra, Shajahan Wahed, Michael White, Thomas Whittaker, Vincent Wong, Susannah Woodrow, Sebastian Zeki.
Pour une communication basée sur la culture en santé (

Some twenty years ago, American, British, and Australian physicians and public health specialists put forward the concept of "health literacy", which is today the subject of extensive work in English-speaking countries but remains, unfortunately, little known in French-speaking countries, particularly in France and even more so in French-speaking African countries. The concept was born from the observation "of the failure of earlier education programs to take into account the social and economic determinants of health [...] which led to an underestimation of the potential role of health education," as the Australian public health professor Nutbeam explained. "Campaigns that concentrated solely on transmitting information, without taking into account individuals' social and economic circumstances, did not achieve their objectives." The definition of health literacy generally cited is the one formulated by the WHO in 2009: "Cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand and use information in ways which promote and maintain good health". The number of articles devoted to or mentioning health literacy has exploded over the past decade (Fig. ). In the United States, the universities of Harvard, Arkansas, Maryland, Emory, Ohio, and Kentucky, to cite only a few examples, have health literacy departments or laboratories, while the National Academies of Sciences, Engineering, and Medicine run an office devoted to health literacy.
The same is true in universities and institutes in Quebec, Sydney, Maastricht, Vienna, Warsaw, Dublin, Athens, Bilthoven (Netherlands), Murcia (Spain), Sofia, and Louvain. Governments have set up committees, offices, or programs in charge of health literacy, as in Canada, Australia, Great Britain, and the United States. Although authors could write as early as 2009 that "health literacy has become an international phenomenon", the phenomenon still stopped at the gates of the French-speaking world. It has finally entered it in recent years, more so in Switzerland and Belgium than in France. Santé Publique France now offers about ten documents on the topic, most of them dating from 2017. A French-speaking health literacy network (Réseau francophone de littératie en santé) brings together some fifty researchers, all attached to European organizations. Despite this, only 110 of the 12,906 articles published up to 2022 containing "health literacy" or "littératie" in their abstract or text have at least one author affiliated with a French organization. The content of the health literacy concept has always been ambiguous. While Nutbeam saw it as a way of "taking into account individuals' social and economic circumstances" and the WHO definition mentions "cognitive and social skills", a 1999 report by the American Medical Association's Committee on Health Literacy mentioned neither social nor economic circumstances and focused solely on "the ability to read and understand medicine leaflets, prescriptions, medical reports, and all the medical material a patient needs to navigate care successfully", what is called functional health literacy.
"This narrow definition of health literacy leaves out much of the meaning and purpose of literacy," Nutbeam wrote, for there are "different 'types' of literacy and uses of literacy in everyday life". For Rudd et al., this definition "encourages a myopic focus on deficient reading and writing skills and ignores the barriers erected by culture, language, and the assumptions of health personnel". "Health literacy does not depend on reading and writing skills, as has been demonstrated in countries where illiteracy is high," health education professors Diane Levin-Zamir and Jane Wills have stressed. Twenty years ago, then, some approached the individual within his or her social setting, others through his or her relationship with documents issued by the medical profession. Most of the many health literacy programs launched since then reflect the second interpretation, notably in France. Various French translations of health literacy have been proposed, such as "culture sanitaire" (sanitary culture), which is restrictive because it evokes hygiene more than health; "compétence informationnelle en santé" (health information competence), both cumbersome and restrictive because more than information is involved; and "compétences en matière de santé" (health-related competencies), relatively faithful to the English but likewise restrictive.
The most widely used expression is "littératie en santé" (health literacy), which researchers at the Université du Québec define as "a person's ability to understand and use language, numbers, images, and technologies in order to exchange, interact with others, grasp his or her environment, acquire new knowledge, develop his or her full potential, and be a full-fledged citizen", while Santé Publique France writes: "by 'littératie en santé' is meant the result of the interaction between a person's abilities to recognize his or her need for health information, to find that information, to understand it, and to use it to make informed decisions about his or her health, and the demands of the health system". Health literacy is generally considered from the perspective of therapeutic patient education, as an intervention intended to improve "the conditions for a satisfactory interaction with patients". Therapeutic patient education is the subject of many studies and programs, within an approach that places the patient at the center of the organization of care and seeks to transform the passivity of the "patient" into active management of his or her health. Literacy has found its place as the theoretical framework of these studies, but the concept of health literacy, forged in reaction to the failures of public health campaigns, goes beyond the caregiver-patient relationship of therapeutic education. The narrow sense of the concept cannot serve as a framework for understanding the determinants of public health behaviors and for designing communication strategies to influence them.
We therefore work with the broad sense of health literacy proposed in 2012 by Rudd, McCray, and Nutbeam: "since the expanded conception of health literacy includes social, political, and individual actions, one must take into account both the skills of individuals and communities and the characteristics of the institutions and professionals that can strengthen or inhibit the actions of individuals or communities". We propose translating health literacy as "culture en santé" (health culture), which covers all of an individual's knowledge and representations in matters of health: his or her understanding of a health issue, of course, but also fears, hopes, and perceptions of health authorities and of medical and non-medical interventions. After the dichotomy the health literacy concept has undergone, it might moreover be useful today for the English-speaking proponents of its broad sense to promote the study of a "health culture" that includes health literacy. Health culture provides the framework for an approach to communication based on the population's representations. Turning its back on top-down approaches, it points toward communication strategies that are fact-based, if we may be allowed a twist on the expression "Evidence-Based Medicine": here the facts are the population's health culture with respect to the health issue concerned. We briefly present a few examples of the implementation of this approach in programs we have carried out in West Africa. Anthropological surveys gave us snapshots of the local health culture, on the basis of which we proposed communication strategies and tools. In 2013, we were asked to develop an awareness-raising strategy and tools for the volunteers of two programs in the Democratic Republic of the Congo (DRC), one aimed at pregnant women, the other at young mothers.
The work of an anthropologist who conducted focus groups provided us with insights into the representations of the women and of those around them: husbands, sisters-in-law, and mothers-in-law. One of the main recommendations to promote among pregnant women was to attend antenatal care from the very beginning of pregnancy, yet we discovered that this "beginning" was not always well defined, as women often considered themselves pregnant only once they felt the movements of the fetus. This led us to include, in the visit-support materials, drawings showing that the fetus exists from the onset of amenorrhea. We likewise identified poor awareness of the specific risks of malaria in pregnant women and of the fact that the disease is transmitted by night-biting mosquitoes, which limited the reach of the recommendation to sleep under a mosquito net; in addition, the smell of the nets, due to the insecticide impregnating them, was considered harmful. We provided the necessary explanations in the communication tools. The survey on perceptions of exclusive breastfeeding up to six months gave us elements explaining why health data in the DRC indicated that the great majority of mothers knew this recommendation but that only a minority applied it. Women said the recommendation was very good... but that they knew their baby, and that their baby needed water and porridge from the very first weeks. The main reasons given were that in Africa a baby needs to drink water like an adult because it is hot, that breast milk does not contain all the necessary nourishment, that a poorly nourished woman cannot produce sufficiently nutritious milk, and, finally, that they did not produce enough milk.
Our response was to avoid injunctions and to refrain from denouncing what might be bad for a baby, since women believe they know better than anyone what is good for their child. We sought to give women and those around them information likely to encourage exclusive breastfeeding, for example by explaining that breast milk contains all the water a baby needs and all the nutrients of an adult diet, that the milk of an undernourished woman is still nutritious, and that a baby given water or porridge suckles less, which reduces lactation. In 2015, the Togolese Ministry of Health asked us to propose a communication strategy and tools to prepare for the possible arrival in the country of Ebola virus disease, which was then affecting three countries of the subregion. An anthropological survey showed that the messages disseminated by the Ministry of Health had reached the population very well, the toll-free hotline being very widely known, and that the most remembered message was to "avoid bushmeat", even though epidemic transmission is exclusively human-to-human, along with washing one's hands and avoiding embraces and handshakes. A widespread idea was that Ebola was a pretext for reinstating the wildlife protection measures applied under the former dictator Eyadema Gnassingbé. The survey also highlighted that while the distancing measures and handwashing had been observed at the start of the awareness campaign, this was less and less the case as time went on.
We proposed reorienting the communication, dropping any mention of animals and informing people about the disease's transmission routes (contact with the sick and the dead, and with their bodily fluids), but not recommending preventive actions as long as the disease was not present in the country. Since Ebola virus disease fortunately spared Togo, this strategy and the video and print communication tools produced to implement it never had to be deployed. In 2018, UNAIDS and the Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund) asked us to propose a communication strategy to increase demand for HIV testing, which led us to work on this issue in Côte d'Ivoire with the National AIDS Control Program. A qualitative anthropological survey showed that the existence and purpose of screening tests were widely known and that the perception of antiretroviral drugs against HIV was contradictory: a majority of respondents knew of them and said they make it possible to live in good health, but HIV was still frequently associated with death. As a result, the announcement of a positive HIV diagnosis was often seen more as a death sentence than as the chance to access treatments that would push that death back. We recommended resuming mass communication to change the image of HIV infection (mass communication had been abandoned some fifteen years earlier in favor of targeted campaigns), and clearly separating "HIV" from "AIDS", notably by banning the expression "HIV/AIDS", since the former does not necessarily lead to the latter.
We were intrigued by the fact that, according to our survey, young people believed more than their elders that HIV equals death, even though AIDS is on the curriculum of several years of primary and secondary school. We studied the school textbooks and discovered that the great majority of them presented progression to death as inevitable for people infected with HIV, sometimes mentioning the tests but almost never the treatments. Among adults, these false ideas seemed to be offset by personal experience: they knew, or had heard of, infected people living on treatment. In 2018, the Global Fund and Côte d'Ivoire's National Tuberculosis Control Program asked us to formulate communication proposals to improve tuberculosis screening; the WHO estimates that 50% of cases go undiagnosed in Africa, and 45% in Côte d'Ivoire. A preliminary survey of some fifty health workers and members of NGOs working on tuberculosis had led us to think that the main explanations for the underdiagnosis of the disease, in addition to social difficulties and shortcomings in the supply of care, were poor knowledge of tuberculosis and a trust in traditional medicine presumed to divert tuberculosis patients from health facilities. Yet a quantitative socio-anthropological survey showed us that the population had very good knowledge of the existence and seriousness of tuberculosis, that only 28.25% thought traditional medicines effective against tuberculosis, and that to the question "What type of medicine do you turn to when it comes to tuberculosis?" only 10.25% had answered "traditional medicine". On the other hand, only half the population knew that tuberculosis treatment is free.
Moreover, tuberculosis was very largely associated with AIDS, which generated a fear of discrimination. Our results thus painted an unexpected "culture" of tuberculosis, on the basis of which we formulated communication recommendations. First, countering the influence of traditional medicine should not be a focus of awareness-raising for tuberculosis screening, while the free availability of treatment needed to be publicized. Second, HIV and tuberculosis, often linked in HIV communication, needed to be dissociated. In 2020, the West African Health Organisation (Organisation Ouest Africaine de la Santé, OOAS), the health agency of the Economic Community of West African States (ECOWAS), asked us to conduct a socio-anthropological survey in 5 of its member states to study representations of Covid-19 and to base a communication strategy and tools on those representations. The qualitative and quantitative arms of the study revealed a very widespread feeling of non-exposure to Covid-19: the belief that while Covid-19 exists on other continents it is not present in the country, ignorance or underestimation of the risk factors of age and obesity, and ignorance of asymptomatic transmission. Conspiracy theories and the false ideas circulating on social media, on the other hand, did not seem to play a measurable role in non-compliance with protective measures or in the lack of vaccine uptake. Our recommendation was to avoid injunctions to respect protective measures and get vaccinated, and to concentrate communication efforts on conveying the necessary information and explanations about the reality of the pandemic in African countries, the risk factors, and the transmission routes.
Examples of populations failing to adhere to public health measures are numerous, often during vaccination campaigns but also, spectacularly, during Ebola virus disease epidemics. Clumsy communication has often helped feed doubts, doubts that turn into mistrust and then into hostility when, instead of understanding the reasons, founded or unfounded, for these doubts and responding to them, public health officials merely hammer home the same messages. Taking the population's health culture into account is essential for designing appropriate communication, limiting the risks of misunderstanding and mistrust, and facilitating adherence. This health culture can be assessed through social science studies, which must cover knowledge and representations regarding the health issue concerned, but also regarding the health system and the promoters of the public health measures. Never have so many articles, reports, meetings, and webinars been devoted to the circulation of false ideas on a health issue as since the beginning of the Covid-19 pandemic, more precisely since the WHO Director-General, Tedros Adhanom Ghebreyesus, declared on 15 February 2020: "We're not just fighting an epidemic; we're fighting an infodemic. Fake news spreads faster and more easily than this virus, and is just as dangerous." That false information now spreads rapidly on the internet is undeniable, but the same is true of accurate information, which now reaches the smartphones of people once cut off from all medical information. One of the public health challenges of Covid-19 communication is to measure the real dangerousness of false information by studying its impact on health culture, on the culture of Covid-19.
The goal of health communication is behavior change: condom use, handwashing, use of a mosquito net, healthy living, protective measures, vaccination... Such behaviors cannot be durably obtained through injunctions, fear, or guilt. Authors were already asking in 2007: "Why do health communication plans look so much alike, when the populations, the problems, the cultures, and the concrete lived experience are so different?" To be effective, health communication must build on the population's health culture in order to determine a strategy and to design messages and communication tools that will win adherence to the desired behavior changes. The impact of these messages is then measured, and they are modified if the population's representations evolve. A health culture approach is thus a circular one, originating in the population, passing through the specialists, and returning to the population, as opposed to the top-down approach of specialists who design in conclave the messages they deem best. Perhaps, in the end, it is simply the extension to the level of communication of the approach that many health workers, health services, and NGOs follow daily with their patients or with the communities in which they work. Raising the population's health culture also means giving people the means to take responsibility for their health, and promoting a democratic kind of health communication: communication based not on injunctions from above, intimidation, or guilt-inducing martial rhetoric, but on appeals to civic spirit, compassion, a sense of responsibility, and fraternity. The author is a consultant in health communication.
The work mentioned here was funded by UNAIDS, the Global Fund, COOPI, the Agence Française de Développement, SIS International, and the government of the Togolese Republic.
Allergic rhinitis management: a Delphi Consensus promoted by the Italian Society of Pediatric Allergy and Immunology (SIAIP)

A previous Italian survey investigated the features of allergic rhinitis (AR) in children and the prevalence of the phenotypes proposed by the Allergic Rhinitis and its Impact on Asthma (ARIA) guidelines. This survey involved 35 pediatric allergy centers throughout Italy and included data from 2,623 patients. The results confirmed the adequacy of the ARIA classification and documented treatment failure in patients with severe AR. Subsequently, the Italian Society of Pediatric Allergology and Immunology (SIAIP) promoted a further survey to update the knowledge on AR in children and adolescents (manuscript submitted). In particular, this survey directly involved more than 800 primary care pediatricians, thus reflecting the real-world management of AR in children and adolescents. The findings showed that most Italian primary care pediatricians adopted the ARIA guidelines, most children complained of moderate-severe symptoms, asthma was a common comorbidity, intranasal corticosteroids and oral antihistamines were first-level choices, and intranasal antihistamine plus corticosteroid was a frequent therapeutic option, mainly in subjects with moderate-severe symptoms. Presently, there are several international guidelines concerning AR management. Despite this abundance of documents, there are no pediatric-oriented guidelines nor documents specific to the Italian pediatric setting. As a result, the SIAIP performed a Delphi Consensus on the practical management of children with AR.
This iterative initiative involved outstanding experts on this topic, who discussed and approved a list of statements to administer to a group of Italian pediatricians with proven experience in AR management. The Delphi method is an indirect, anonymous, and iterative way to obtain consensus.

Delphi method

A group of five experts (the authors of this paper) on AR management constituted a steering committee tasked with producing the present Delphi Consensus. This steering committee drafted and shared a questionnaire (first round) to administer to a group of pediatricians, who had to express their grade of agreement with the statements (second round). The members of the steering committee have proven experience in AR management, documented by more than 30 years of clinical practice in allergic diseases, and scientific standing demonstrated by more than 20 publications on this topic produced in the last five years. The steering committee formulated the statements considering the current scientific literature on AR management and personal expertise. The group of involved pediatricians was selected based on clinical practice in third-level teaching hospitals and scientific merit documented by at least five publications on this topic produced in the last five years. In addition, all participants are Fellows of the SIAIP and work in all regions of Italy, so that the experts' panel reflects geographic diversity across Italy. The first round consisted of a face-to-face interaction to discuss the initial draft of questions and approve them. The second round consisted of the creation of a specific online platform to collect the participants' votes on the grade of agreement and to ensure the anonymity of each participant. The Delphi Consensus comprised questions concerning the definition of AR and type 2 inflammation, epidemiology, comorbidity, symptom characteristics, and medications (use and schedules). The Table reports all questions in detail.
After collecting and analyzing the second round's results, the steering committee discussed and approved them. The Delphi Consensus process was conducted in June 2024.

Delphi statements

The Delphi document comprised questions concerning the definition of AR and type 2 inflammation, epidemiology, comorbidity, symptom characteristics, and medications (use and schedules). The Table reports all questions in detail.

Delphi assessment

The Delphi Consensus Panel was requested to rate their agreement with each questionnaire statement using a 5-point Likert scale: 1 (strongly disagree), 2 (disagree), 3 (partially agree), 4 (agree), and 5 (strongly agree). Each expert provided an individual and anonymous vote on the statements, considering routine practice and clinical evidence. The number and percentage of participants scoring each item were calculated. The scientific committee then discussed the results in a virtual meeting. For each questionnaire statement, consensus was considered to have been achieved when at least 80% of the Consensus Panel agreed (sum of scores 4-5), with subsequent acceptance by the steering committee. The statistical analysis was descriptive; a mean score of the summed 4 and 5 scores was calculated, also considering the standard deviation.
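The agreement rule described in the Delphi assessment can be sketched as a short calculation. The votes below are made-up illustrative values, not the study's actual data:

```python
# Minimal sketch of the consensus rule: a statement reaches consensus when at
# least 80% of panelists rate it 4 (agree) or 5 (strongly agree) on the
# 5-point Likert scale. The votes below are illustrative, not study data.
from statistics import mean, stdev

def agreement_rate(votes):
    """Share of Likert votes (1-5) counted as agreement (score 4 or 5)."""
    return sum(1 for v in votes if v >= 4) / len(votes)

def reaches_consensus(votes, threshold=0.80):
    return agreement_rate(votes) >= threshold

# Hypothetical second-round panel of 42 votes on one statement:
votes = [5] * 30 + [4] * 8 + [3] * 3 + [2] * 1
print(f"agreement: {agreement_rate(votes):.1%}")  # agreement: 90.5%
print(f"consensus: {reaches_consensus(votes)}")   # consensus: True
print(f"mean: {mean(votes):.2f}, sd: {stdev(votes):.2f}")
```

In this hypothetical example, 38 of 42 votes (90.5%) fall in the agreement band, so the statement would clear the 80% threshold, subject to the steering committee's acceptance.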
The first round served to define a list of statements, designed by the steering committee, to administer to the panel of experts. This round included the five independent experts who constituted the steering committee. After thorough debate, agreement among these steering committee members was complete, i.e., a 100% full agreement (score 5) was reached for all 22 statements. The second round included 42 other experts, identified by the steering committee, who voted on the 22 statements. The voting results are reported in the Figures. Seven statements (6, 8, 9, 12, 13, 15, and 22) obtained a full agreement level of 100%. Nine statements (1, 2, 3, 4, 5, 7, 10, 14, and 16) obtained an agreement level between 90 and 99%. The remaining six statements reached an agreement level between 80 and 89%. Consequently, all statements reached the a priori defined positive consensus threshold of > 80%. The present Delphi Consensus globally involved 47 Italian experts on AR management in the pediatric setting. Therefore, the present Delphi Consensus reflected how pediatric AR is managed in Italy’s real-world practice. The profile of participants also guaranteed an outstanding scientific standard. There is good agreement (> 90%) among participants on the concepts that type 2 inflammation characterizes AR and leads to eosinophilic infiltration of the nasal mucosa. Namely, there is a body of evidence sustaining this concept .
In addition, a large majority of participants believe that allergic inflammation depends on causal allergen exposure even without symptom occurrence, i.e., the concept of minimal persistent inflammation . Most participants agreed about the increasing prevalence of AR, as recently demonstrated by a meta-analysis . Almost all participants (97%) shared the concept that AR should not be considered a trivial disease, as it is accompanied by asthenia, irritability, depressed mood, anxiety, poor concentration, and sleep disturbances, all annoying symptoms that cause a significant negative impact on quality of life. The ARIA document and robust evidence confirmed these AR characteristics . There was also full agreement (100%) about the notion that AR frequently presents with comorbidities . Namely, AR is often associated with other conditions such as atopic dermatitis, allergic conjunctivitis, rhinosinusitis, bronchial asthma, eosinophilic esophagitis, food allergy, and sleep disorders . In addition, in pediatric age, AR can alter the development of the craniofacial massif and the normal development of the dental arch . Near-full agreement (97.2%) concerned the idea that AR is the main risk factor for the onset of bronchial asthma and, if asthma is already present, the main risk factor for poor asthma symptom control. In this regard, there is broad evidence supporting this statement, which is widely shared . As a result, all participants agreed on the need for thorough diagnostic pathways to detect asthma comorbidity early. There is evidence that adequately treating AR significantly affects the asthma course . There was also full consensus about the pathophysiological characteristics of AR symptoms. Nasal itching, sneezing, and watery rhinorrhoea depend mainly on the abundant release of histamine during the allergic reaction (histamine-dependent symptoms), whereas nasal obstruction is mainly an expression of allergic inflammation .
As nasal obstruction is mainly an expression of allergic inflammation, intranasal corticosteroids efficaciously dampen type 2 inflammatory events . A lower, though still positive, agreement (80.5%) was reached concerning the use of topical antihistamines, probably reflecting uncertainty about the possible relief of ocular symptoms. Namely, a large body of studies has in fact shown that intranasal antihistamines allow a significant dose reduction and are more effective than the systemic formulation . In addition, there is also evidence of their efficacy in alleviating ocular symptoms, as recently documented by a meta-analysis . There was full agreement (100%) concerning the efficacy and safety of topical corticosteroids in treating patients with AR. There was shared awareness that they effectively reduce the degree of type 2 inflammation, consequently relieve nasal obstruction, and can also act on comorbidities such as rhinosinusitis, eye symptoms, or asthma . Consistently, all participants agreed on the fact that topical corticosteroids must be administered appropriately, considering the symptomatology and the mode of application . Almost full consensus (97.2%) regarded the statement declaring that a fixed antihistamine/corticosteroid combination (azelastine/fluticasone) has high efficacy, rapid action, and safety even in pediatric age . Similarly, there was full agreement on the concept that the azelastine/fluticasone combination acts with a dual effect on both the histamine response and inflammation, with greater speed and efficacy than the non-combined administration of the two drugs on all symptoms of allergic rhinitis, as well documented in the literature . There was also a high grade of consensus (91.7%) about the concept that the combination of azelastine/fluticasone should be considered in children/adolescents when maximum results are to be achieved in a short time. Namely, this fixed combination provides quick symptomatic activity .
Most participants (88.9%) agreed that using the azelastine/fluticasone combination is indicated for appropriate periods of time (at least one to two weeks) to ensure prompt resolution of symptoms and adequate control of type 2 inflammation. This statement reflects the need to assure a dampening of type 2 inflammation, which usually requires one to two weeks . There was also consensus (81%) about the notion that the azelastine/fluticasone combination can also be used in a symptomatic mode in the case of sporadic but nevertheless intense rhinitis episodes. In this case, some participants preferred to prioritize inflammation-control activity over a merely symptomatic one. Moreover, there was agreement about the notion that the combination of azelastine/fluticasone could result in a sparing of the inhaled corticosteroids used for asthma therapy. Probably, some participants were doubtful that properly treating allergic rhinitis can also positively influence the anti-inflammatory treatment of asthma. In fact, there is a large body of evidence that instead shows how important it is to treat allergic rhinitis well to ensure adequate asthma control . Consistently, some panelists expressed low agreement about the notion that the azelastine/fluticasone combination can lead to savings in the use of oral antihistamines, with lower economic costs and greater adherence to treatment, which is particularly relevant in adolescence. Actually, there is documentation that azelastine/fluticasone improves AR management . There was also wide agreement (86.1%) concerning the rapidity of action and, consequently, the preference for azelastine/fluticasone. There is evidence that this combination is quicker than antihistamines alone in relieving complaints .
The last statement gathered full approval: taking the time to explain well to children/adolescents and their families what allergic rhinitis is, its causes, and the use of the most appropriate medication, in order to achieve maximum involvement (patient engagement) in the proper management of the disease, is crucial. The present document had some limitations, including the collection of personal opinions, the lack of objective measures, and mostly the absence of clinical data. Moreover, the statements concerned only some aspects of AR management. However, this consensus involved outstanding pediatricians with large experience in managing many children with AR. Thus, the results provided robust outcomes that also reflected what happens in the real world. Further studies should confirm these findings, adopting an adequate methodology. In the future, this initiative could involve a wider audience of pediatricians involved daily in the management of children and adolescents with AR in their clinical practice. Moreover, the SIAIP is currently engaged, and will be even more so in the future, in initiatives aimed at updating knowledge on the topic through various educational initiatives (distance learning, meetings, courses, and congresses). The primary outcome should be to achieve a wide application of these recommendations in clinical practice. In conclusion, the present Delphi Consensus reported that a panel of Italian expert pediatricians considered type 2 inflammation the leading characteristic of allergic rhinitis, thus deserving adequate treatment. Contextually, this document endorsed the concept that rapid symptom relief represents a priority objective in managing children and adolescents with allergic rhinitis. In addition, safety should always be evaluated when prescribing any therapy.
In this context, the present Delphi Consensus underlined the experts’ opinion that the fixed combination of intranasal corticosteroid plus antihistamine (i.e., azelastine/fluticasone) may represent a valuable option for treating young people with allergic rhinitis. This issue reflects what the most recent guidelines advocate for AR management.
Secretomes of Gingival Fibroblasts From Periodontally Diseased Tissues: A Proteomic Analysis

Introduction Human gingival fibroblasts (GF) are a crucial component of the gingival stroma and have a significant role in the maintenance of periodontal tissue architecture and homeostasis (Wielento et al. ). Fibroblasts primarily synthesize the extracellular matrix (ECM), which consists of a variety of macromolecules whose assembly, architecture, and biomechanical properties vary within and between tissues (Plikus et al. ; McCulloch and Bordin ). Moreover, fibroblasts communicate with each other and with other cells by secreting growth factors and cytokines or by developing cell–cell contacts, creating cellular communication networks (Plikus et al. ). While the primary function of GF is the maintenance of the homeostasis of the gingival tissues by regulating the production of ECM, these cells also play important roles in various physiological (wound healing) and pathological situations (inflammation) due to their high regeneration potential (Wielento et al. ; Häkkinen et al. ). In periodontitis, periodontal tissue destruction occurs due to the combination of a dysbiotic subgingival biofilm and a non‐resolving host‐response, which results in changes in the ECM and surrounding microenvironment. This dysbiosis, in turn, activates fibroblasts through migration, proliferation, and contraction to restore homeostasis in the damaged tissue (Wielento et al. ). This ability to respond to chemical and physical signals from the ECM allows the GF to adopt a secretory and migratory phenotype resulting in “scarless” tissue healing, with a regenerative potential similar to adult mesenchymal stem cells (MSC), as demonstrated both in vitro and in vivo (Häkkinen et al. ; Kim et al. ). Thus, gingiva represents a minimally invasive source of cells for therapeutic applications.
Of particular interest in the context of current regenerative therapies is the paracrine activity of cells via their secretomes (Wang, Chen, et al. ). According to the Medical Subject Headings (MeSH), secretomes are defined as “the set of all the soluble factors and extracellular vesicles secreted into the extracellular space by cells.” Secretion of such components, including soluble proteins (growth factors, cytokines, chemokines), lipids, nucleic acids, and extracellular vesicles (EV), serves fundamental functions in both the autocrine and paracrine cell signaling pathways for regulating the local microenvironment. Therefore, the use of cell secretomes, in the form of cell conditioned media (CM), may offer a promising alternative to currently complex and cost‐intensive cell therapies for advanced tissue regeneration. Practical benefits include relative ease of preparation, “off‐the‐shelf” application, and better cost‐efficacy (Marolt Presen et al. ). In the context of periodontal/bone regeneration, previous data suggest that CM may be at least as effective as, if not more effective than, cell transplantation (Hiraki et al. ; Osugi et al. ; Sanchooli et al. ; Shanbhag et al. ). Currently, various tissue/cell sources are being investigated to produce secretomes for therapeutic applications. Clinical grade secretome production requires ex vivo cell culture and expansion, ideally under Good Manufacturing Practice (GMP) conditions. Gingival tissues can be easily obtained during periodontal surgical procedures and are often discarded as clinical waste during resective surgery, which further enhances the prospects of gingiva as a source of therapeutic cells (McCulloch and Bordin ). Previous studies have compared various properties of GF (Baek et al. ; Li et al. ; Kang et al. ; Yang et al. ; Bartold and Page ; Kanda‐Nakamura et al. ; Bao et al. ; Bartold and Page ), or gingival progenitor cells (Makkad ; Bekić et al. ; Ge et al.
), isolated from healthy versus periodontitis‐affected tissues. Moreover, proteomic analyses of in situ or in vitro cultured GF from healthy (Bao et al. ; McKnight et al. ; Onyedibe et al. ) and diseased sites (Bao et al. ) have been reported. Recent data also indicate that the CM of GF has anti‐inflammatory and pro‐wound healing effects based on in vitro and in vivo assays (Ahangar et al. ). However, proteomic analyses of the secretomes of GF from periodontitis‐affected tissues (GF‐perio) are lacking. Given the important roles of GF in homeostasis, ECM synthesis/remodeling, immune modulation, and wound healing during both periodontal health and disease, it is reasonable to expect that the proteomic mediators of these functions would be reflected (at least to some extent) in their secretomes. Therefore, the objective of the present study was to investigate the composition of the secretomes of GF‐perio using proteomics. Materials and Methods 2.1 Cell Culture The use of human cells and tissues was approved by the Regional Committees for Medical Research Ethics in Norway (2011/1516/REK, 2016/1267/REK‐nord). GF‐perio were isolated from biopsies obtained following informed donor consent, as previously described (Shanbhag et al. ). Briefly, gingival connective tissues were harvested from systemically healthy nonsmoking Stage III or IV periodontitis patients ( n = 6; 46–72 years) undergoing access flap surgery at the Department of Clinical Dentistry, University of Bergen, Bergen, Norway. All patients fulfilled the criteria for surgical intervention, that is, not fulfilling the therapeutic endpoints after subgingival instrumentation and adequate plaque control (periodontal therapy steps 1 and 2), as reflected by probing pocket depth (PPD) > 5 mm and bleeding on probing (BoP), that is, persistent inflammation, despite previous nonsurgical therapy and adequate plaque control (Sanz et al. ).
One connective tissue biopsy per patient was harvested from the interdental aspect of a full‐thickness mucoperiosteal flap around maxillary or mandibular first or second molars. The tissue biopsy was placed in a tube containing phosphate‐buffered saline (PBS; Invitrogen, Waltham, MA, USA) and immediately transferred to the laboratory for processing. After thorough washing (3×) with PBS, primary explant cultures of GF‐perio were established in Dulbecco's Modified Eagle's medium (DMEM, Invitrogen, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (GE Healthcare, South Logan, UT, USA) and 1% antibiotics (penicillin/streptomycin; GE Healthcare). Cells were sub‐cultured and expanded in humidified 5% CO 2 at 37°C. Passage‐2 or ‐3 cells from each of the donors were used to prepare CM (Shanbhag et al. ) (Figure ). As a reference group for the proteomic analysis, CM was prepared from GF (passage‐2 or ‐3) of periodontally healthy subjects (n = 6; GF‐healthy) obtained from a biobank at the Department of Clinical Dentistry, University of Bergen. These cells were obtained during routine dental procedures from clinically healthy and non‐inflamed gingival tissue sites and cryopreserved for general research use (not specifically for the present study). 2.2 Preparation of CM CM from GF‐perio and GF‐healthy were prepared using a standardized protocol (Shanbhag et al. ). Briefly, passage‐2 or ‐3 cells from each donor were separately cultured in growth media until 80% confluency at which point the media was removed and, following 3× washes with PBS (Invitrogen, Waltham, MA, USA), replaced with serum‐ and antibiotic‐free DMEM. After 48 h, the supernatant media (CM) were collected, centrifuged (4000 g , 10 min), aliquoted, and stored at −80°C until further use. Before experiments, the CM were concentrated (~30 fold) using Amicon Ultra‐15 3 kDa centrifugal filter devices (Merck Millipore, Billerica, MA, USA) following the manufacturer's protocol. 
Briefly, after PBS equilibration, 15 mL of each CM was centrifuged in the Ultra‐15 tubes at 4000 g for 30 min at 4°C, followed by buffer exchange with PBS and re‐centrifugation at 4000 g for 30 min at 4°C. The corresponding concentrated CM from the individual donors were used for proteomic analysis. 2.3 Liquid Chromatography With Tandem Mass Spectrometry (LC‐MS/MS) CM from GF‐perio and GF‐healthy were analyzed using LC‐MS/MS via label‐free quantitation, as previously described (Aasebø et al. ). Briefly, the total protein concentration of each sample was measured using the bicinchoninic acid assay (Pierce BCA Kit, Thermo Fisher, Waltham, MA, USA) and 10 μg protein was processed to obtain tryptic peptides. About 0.5 μg protein, as tryptic peptides dissolved in 2% acetonitrile and 0.5% formic acid, was injected into an Ultimate 3000 RSLC system connected online to an Exploris 480 mass spectrometer equipped with an EASY‐spray nano‐electrospray ion source (all from Thermo Scientific, Sunnyvale, CA, USA). Additional details of LC‐MS/MS are reported in the . The LC‐MS/MS data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository ( https://www.ebi.ac.uk/pride/ ; accessed on 07.08.2024) with the data set identifier PXD054664. 2.4 Bioinformatic Analysis The LC‐MS/MS raw files were searched using Proteome Discoverer software (version 2.5.0.400; Thermo Scientific) against the human database. Data were analyzed using Perseus software (version 2.0.9.0) (Tyanova et al. ). To ensure accurate quantification of proteins, a filtration strategy based on detection of proteins in at least five (of six) donors in each CM group (GF‐perio and GF‐healthy) was applied. First, common (proteins identified in both groups) and exclusive proteins (proteins identified in only the perio or healthy group) were identified.
Next, quantitative analysis of the common proteins was performed to identify the differentially expressed proteins (DEPs) using a two‐sided Student's t ‐test [−log10( p ‐value) and p < 0.05] in combination with a permutation‐based correction for multiple hypothesis testing (false discovery rate; FDR = 0.05). Functional profiling of common proteins between the groups and exclusive proteins in each group was performed using the g:Profiler software (version e111_eg58_p18_30541362) (Kolberg et al. ) based on the gene ontology (GO) categories of molecular function (MF), biological process (BP), and cellular component (CC) databases. Additional details of the bioinformatic analysis are reported in the . Functional enrichment analysis (FEA) was performed to classify individual proteins into similar functional categories (MF, BP, CC) using the open‐access functional enrichment analysis tool FunRich, as previously described (Pathan et al. ). 2.5 Multiplex Immunoassay For validation of LC‐MS/MS data, the Quantibody Human Bone Metabolism Array Q1 (RayBiotech Inc., Norcross, GA, USA) with 31 bone‐related cytokines (Supporting Information: Table ) was used, as previously described (Shanbhag et al. ). This array is based on the sandwich enzyme‐linked immunosorbent assay (ELISA) technology, which allows simultaneous quantitative measurement of multiple proteins in a sample. Briefly, following the manufacturer's protocol, array hybridization was performed using test samples and standard cytokines on a custom microarray slide (RayBiotech Inc.), where each antibody is spotted in quadruplicate. Array scanning was performed using a laser scanner (GenePix 4000B) and proprietary software (both from Axon Instruments, Burladingen, Germany) at different photomultiplier tube gains; the most suitable scan was selected for normalization.
Cytokine concentrations were calculated based on linear standard curves and normalized to the corresponding total protein levels (pg/μg total protein); data are presented as fold changes in GF‐perio CM relative to the reference group. 2.6 Statistical Analysis Identification of DEPs was performed using a two‐sided Student's t ‐test with Fisher's correction in Perseus software. FEA was performed using the FunRich open access tool, which applies the hypergeometric test with Bonferroni correction for p ‐values. All other statistical analyses were performed using the Prism 9 software (GraphPad Software, San Diego, CA, USA). Linear data are presented as means (±SD), unless specified. Normality testing was performed using the Shapiro–Wilk test, and independent sample t ‐tests with a 0.05 significance level were applied.
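The detection‐based filtration strategy described in the bioinformatic analysis — a protein is counted in a group only if detected in at least five of six donors, after which proteins are split into common and exclusive sets — can be sketched as follows. The toy intensity tables are invented, not study data; the protein names merely echo the paper's finding that PDGFC was perio‐exclusive and FGF7 healthy‐exclusive:

```python
def detected(protein_hits, min_donors=5):
    """A protein passes the filter if detected (non-None intensity)
    in at least `min_donors` of the group's donors."""
    return sum(1 for x in protein_hits if x is not None) >= min_donors

def classify(perio, healthy, min_donors=5):
    """Split proteins into common (passing the filter in both groups)
    and exclusive (passing the filter in exactly one group)."""
    perio_ok = {p for p, hits in perio.items() if detected(hits, min_donors)}
    healthy_ok = {p for p, hits in healthy.items() if detected(hits, min_donors)}
    return {
        "common": perio_ok & healthy_ok,
        "perio_only": perio_ok - healthy_ok,
        "healthy_only": healthy_ok - perio_ok,
    }

# Hypothetical label-free intensities for six donors per group (None = not detected)
perio = {
    "FN1":   [9.1, 8.7, 9.0, 8.8, 9.2, 8.9],
    "PDGFC": [7.2, 7.0, None, 7.1, 7.3, 7.4],
    "FGF7":  [None, None, 6.0, None, None, None],
}
healthy = {
    "FN1":   [9.0, 8.9, 9.1, 8.6, None, 9.0],
    "PDGFC": [None, None, 6.8, None, 6.9, None],
    "FGF7":  [6.2, 6.1, 6.0, 6.3, None, 6.1],
}
print(classify(perio, healthy))
```

Only the common set then proceeds to the quantitative DEP analysis, while each exclusive set is profiled separately.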
Results 3.1 Proteomic Profile of GF‐Perio Secretomes LC‐MS/MS revealed a total of 1833 proteins in GF‐perio CM, slightly more than in the healthy reference group ( n = 1817). Protein intensity correlation analysis showed a strong correlation between GF‐perio donors with an average Pearson R value of 0.93 (range 0.90–0.95). Compared to the healthy reference group, GF‐perio CM revealed: a. More exclusive proteins ( n = 127 vs. 111), that is, proteins detected only in GF‐perio CM, and b. More DEPs ( n = 73 vs. 0), that is, proteins with significantly greater abundance (Figure , Table ). 3.2 Functional Analysis of Proteins in GF‐Perio Secretomes GO profiling of the total proteins in GF‐perio CM revealed enrichment of specific categories according to CC, MF, and BP. Among the top 10 enriched categories certain CC (“extracellular matrix”), MF (“protein binding”), and BP terms (“adaptive immune response”) were directly related to wound healing (Table ). Next, FEA was performed to determine specifically enriched categories of proteins among the DEPs (relative to the reference group) and exclusive proteins (independent of the reference group) in GF‐perio CM. Among the top 10 enriched categories in DEPs, the CC categories of “exosomes” and “cytoplasm” were statistically significant ( p < 0.05 according to multiple testing) (Table ).
Similarly, among the exclusive proteins, the CC categories of "exosomes," "lysosomes," and "cytoplasm," and the MF category of "chaperone activity," were significantly enriched (p < 0.05 according to multiple testing) in GF-perio CM (Table ).

3.3 Proteins in GF-Perio Secretomes Related to Wound Healing and Bone Regeneration

Several key mediators of wound healing were detected in GF-perio CM, such as growth factors [transforming growth factor beta (TGFβ), bone morphogenic protein (BMP), vascular endothelial growth factor (VEGF), insulin-like growth factor (IGF), and platelet-derived growth factor (PDGF) family proteins], cell adhesion molecules [cadherins (CDH2, CDH6, CDH11, CDH13), tetranectin (CLEC3B), fibronectin (FN1), integrins, etc.], and ECM proteins [collagens (COL), collagenases/metalloproteinases (MMP), tissue inhibitors of metalloproteinases (TIMP), etc.] (Tables and ). Moreover, among the top 10 DEPs in GF-perio were important cell adhesion molecules, such as cadherin-5/VE-cadherin (CDH5) and tetranectin (CLEC3B), and ECM proteins, such as SPARC-like protein-1 (SPARCL1) (Table ). While a majority of the relevant wound healing-related proteins were identified in both GF-perio and GF-healthy CM, some differences were observed between the groups among the exclusive proteins (Table ). For example, certain growth factors [e.g., platelet-derived growth factor C (PDGFC), neuroligin-2 (NLGN2)] were only detected in GF-perio CM, while others [platelet-derived growth factor receptor-like protein (PDGFRL), fibroblast growth factor-7 (FGF7), hepatoma-derived growth factor-related protein-3 (HDGFL3), vascular endothelial growth factor receptor-2 (KDR)] were only detected in the healthy reference group. To validate the LC-MS data, a multiplex immunoassay of bone-related cytokines was used.
Six of the included cytokines revealed consistent values for all tested samples; these included proteins related to inflammation [interleukins (IL6, IL8)], cell proliferation [fibroblast growth factor-2 (FGF2), monocyte chemoattractant protein-1 (MCP1)/CC motif chemokine ligand-2 (CCL2)], and ECM components [matrix metallopeptidase-2 (MMP2), osteoactivin/transmembrane glycoprotein NMB (GPNMB)]. Compared to the healthy reference group, GF-perio CM revealed trends for increased detection of all but one cytokine (GPNMB), although none of these reached statistical significance (p > 0.05 for all) (Figure ).
Discussion

The objective of the present study was to characterize the secretomes of GF-perio using LC-MS/MS. Several growth factors, cytokines, chemokines, and extracellular matrix proteins important for wound healing and regeneration were identified in GF-perio CM. The presence of selected bone-related proteins was confirmed via an immunoassay. Relative to a reference group of healthy GF, significant enrichment of specific protein categories, particularly exosomes, was observed in GF-perio CM. Thus, the present findings offer relevant insights on the secretomes of GF-perio with implications for use in regenerative therapies. Previous studies have characterized the proteomic profiles of GF (cell-lysates) (McKnight et al. ) and their CM (secretomes) (Onyedibe et al. ) from healthy donors. Recently, Bao et al. (Bao et al. ) compared the protein profiles of in situ gingival biopsies harvested from healthy and periodontally diseased sites. Consistent with our findings, the authors reported similar global profiles between healthy and diseased sites and a similar number of DEPs (n = 69), mostly detected in the diseased sites (Bao et al. ). Functional profiling also revealed enrichment of similar GO categories and components as in the present study. These consistent findings are especially interesting given the differences in study designs.
While in the former study (Bao et al. ) proteins were analyzed directly from in situ harvested gingival tissues, the secreted proteins in the present study were obtained from in vitro cultured GF. While it may be argued that the in vitro culture process introduced some modifications in the properties and secretory profiles of the cells, based on the above findings, it may be speculated that the inherent (in vivo) secretory profiles of GF are largely unaltered following in vitro culture. Nevertheless, the present study findings are relevant for regenerative therapy protocols where secretomes/CM are produced via ex vivo culture expansion of cells harvested from tissue biopsies. In the context of wound healing, GF-perio CM revealed the presence of several proteins related to the different phases (Table ). Wound healing is a complex and dynamic process comprising four interconnected phases, that is, hemostasis, inflammation, proliferation/angiogenesis, and ECM synthesis/remodeling (Velnar et al. ). Of particular interest was the detection of several growth factors of the TGFβ-, BMP-, VEGF-, IGF-, PDGF-, and CCN-family, important for cell proliferation and differentiation (Barrientos et al. ; Linkhart et al. ). Moreover, relative to the reference group (GF-healthy CM), a number of cell adhesion molecules (CDH5, CLEC3B) and ECM proteins (SPARCL1) were significantly overexpressed in GF-perio CM. Functional analysis of DEPs also revealed a significant enrichment of proteins related to exosomes in GF-perio CM. This is of particular interest, given the emerging role of EVs in wound healing (Lu et al. ) and in periodontal and bone regenerative therapies (Wang, Chen, et al. ; Wang, Cao, et al. ). In addition to soluble proteins, GF are known to release EVs, including exosomes, to mediate their paracrine functions (Yin et al. ; Zhuang and Zhou ; Sun et al. ). A post-hoc analysis confirmed that at least 85 of the "top 100 EV proteins" according to Vesiclepedia (Chitti et al.
) were present in GF-perio CM (Table ). In light of these data, further characterization of the EV-component of GF-perio CM and its effects on target cells is warranted. It is relevant to discuss the present findings in the context of the defining clinical factor characterizing GF-perio, that is, inflammation. These cells were obtained from the periodontally affected teeth of patients undergoing surgery. According to current guidelines (Sanz et al. ), these teeth were assigned to a surgical intervention since they presented with persisting signs of inflammation (PPD > 5 mm and/or BoP), despite previous nonsurgical instrumentation and adequate plaque control. Since GF were isolated from connective tissue biopsies collected during access flap surgery around periodontally affected teeth, it is reasonable to assume that these cells were obtained from a microenvironment characterized by, among other things, inflammation and hypoxia (Celik and Kantarci ). In this context, inflammation (via cytokine stimulation) and hypoxia are among the most commonly reported "preconditioning" strategies to enhance the therapeutic efficacy of cells and their secretomes (Chen et al. ; Hertel et al. ; Pulido-Escribano et al. ). The mechanisms implicated in these effects have included, in the case of hypoxia, enhanced stemness and differentiation potential, and, in the case of cytokine stimulation, a shift towards a more "anti-inflammatory phenotype" and enhanced immune modulation (Chen et al. ). Indeed, these changes are also reflected in the secretomes and EVs of the preconditioned cells (Long and Wang ). Whether the molecular findings of the present study, for example, enrichment of exosomes in GF-perio CM, could be partly attributed to some form of inflammatory "preconditioning" is presently speculative and requires further investigation. It is also relevant to discuss the present findings in the context of the evidence of fibroblast heterogeneity in the periodontium.
The relevance of different fibroblast subpopulations (based on lineage, development, differentiation potential, response to stimuli, etc.) has been extensively discussed in the context of both periodontal health and disease (McCulloch and Bordin ; Phipps et al. ; Lekic et al. ). Such heterogeneity has been reported not only between fibroblasts from different tissue components (e.g., PDL vs. gingiva) but also within the same tissue. In the gingiva, different fibroblast subtypes have been identified, with additional differences being detected in healthy and diseased tissues (Hakkinen and Larjava ; Hassell and Stanek ). While a majority of the fibroblast subtypes have been characterized based on in vitro differences in morphology, proliferation rate and differentiation potential, some evidence also suggests that these subtypes may differ in their secretory properties (Lekic et al. ). Thus, certain fibroblast subtypes may dominate during specific phases of a healthy or disease state, thereby influencing the secretory profile. However, reliable identification of these subtypes in vivo has been limited by the lack of highly specific surface markers which could allow their differential detection. Using emerging techniques (e.g., single‐cell analysis), future studies may reveal distinct fibroblast subpopulations in the gingiva (in health and disease) with corresponding differences in secretory profiles. With regard to the methodology, a systematic and comprehensive bioinformatic approach was used to analyze the LC‐MS/MS data in the present study. Moreover, proteomic findings were validated using a more conventional ELISA‐based immunoassay. Nevertheless, some study limitations must be acknowledged. Firstly, the number of GF donors ( n = 6) was limited. Although recent comparative studies of proteomic data have reported similar donor numbers (Lertruangpanya et al. ; Chen et al. ; Shin et al. 
), inclusion of additional donors may have provided a clearer picture of donor-related variations in GF-perio secretomes. Moreover, the "healthy" reference GFs were obtained from a biobank of previously isolated cells from clinically healthy gingival tissues and not from age- and gender-matched controls to the GF-perio donors. While "donor-matched" (same patients) cell harvesting may have reduced some potential bias, the reliability of obtaining truly "healthy" tissue controls in periodontitis patients, especially severe-to-advanced cases (Stage III–IV) requiring surgery, may be questioned (Bao et al. ). Additionally, no "functional" assay (e.g., in vitro wound healing or gene expression analysis) was performed to investigate the effects of GF-perio CM on relevant target cells, for example, osteoblasts or PDL fibroblasts. A bioassay (e.g., RNA-sequencing) to detect differential effects of GF-perio and GF-healthy secretomes on such target cells would be interesting for future research. Finally, GF in the present study were cultured in vitro under plastic-adherent, serum-supplemented conditions, which may not reflect the true in vivo scenario but represents a more clinically relevant approach, since secretomes for clinical applications are produced from ex vivo culture-expanded cells under GMP conditions (Sagaradze et al. ).

Conclusions

Within its limitations, the present study demonstrates that, in addition to several proteins important for wound healing and bone regeneration, the secretome of GF from periodontally diseased tissues is significantly enriched for proteins related to exosomes. Since gingival biopsies can be easily obtained during periodontal surgery, the secretomes of GF-perio represent a promising biological therapy for regenerative applications. Further investigation of their potency, along with efficacy testing in relevant in vitro and in vivo models, is warranted. Siddharth Shanbhag, Dagmar Fosså Bunæs and Kamal Mustafa conceived and designed the study.
Anne Kari Smedås, Lovise Gangeskar Paris, and Siddharth Shanbhag performed the experiments. Anne Kari Smedås, Lovise Gangeskar Paris, Dagmar Fosså Bunæs, Niyaz Al-Sharabi, and Siddharth Shanbhag contributed to data analysis. All authors contributed to the interpretation of data and writing. All authors read and approved the final manuscript. The use of human cells and tissues was approved by the Regional Committees for Medical Research Ethics (REK) in Norway (2011/1516/REK and 2016/1267/REK-nord). The authors have nothing to report. The authors declare no conflicts of interest. Supporting information.
Physicians' preventive practices: more frequently performed for male patients and by female physicians

Numerous studies have examined the influence of the gender of both the patient and the physician on the quality of preventive care dispensed. Some preventive procedures appear to be performed more frequently among men and others among women. Men thus appear to receive preventive cardiovascular care and advice about tobacco and alcohol use more frequently than women, but diet and lifestyle advice less often. The physician's gender also appears to be associated with the provision of preventive care. On the whole, women doctors appear to provide this care more often than their male colleagues, as demonstrated for cardiovascular prevention, cancer screening, and vaccination. This result has nonetheless not been found systematically, and some aspects of prevention, such as prevention of overweight/obesity and advice about cessation or reduction of smoking and drinking, do not seem to differ according to the physician's gender. Beyond these somewhat contradictory observations, other aspects merit closer study. One of these is the combined effect of the gender of the patient and the doctor, currently the object of two hypotheses. First, effects of projection or identification might cause preventive care to be more frequent when the genders of each protagonist are concordant, as reported by a study of dietary-lifestyle advice and successful attainment of glycemic and blood-pressure goals. Other studies, including some on this same dietary-lifestyle counseling topic, have reported different results. Second, in the area of cardiovascular prevention, for example, practice differs less between patients of each gender when their primary care physicians are women compared with men. To our knowledge, such analyses have not yet been conducted in other areas of prevention.
Finally, we assume that women physicians may have more consistent preventive practices among themselves than their male counterparts. To our knowledge, the variation in physicians' preventive practices according to their gender has never been studied. The overall objective of this study was to analyze the gender differences in the preventive practices of French general practitioners (GPs). More specifically, we sought to analyze the association between their preventive practices, on the one hand, and, on the other, the patient's gender, the GP's gender, and the combined effect of both, as well as to study the homogeneity of GPs' preventive practices according to their gender.

Design

This study is an ancillary analysis of data from an observational survey named PrevQuanti, designed to assess social inequalities in the preventive care — screening for breast and cervical cancer, tobacco and alcohol consumption, and cardiovascular risk — provided by GPs to patients aged 40–74 years. A power calculation determined that we would require 50 GPs and 70 patients per GP to be able to demonstrate social gradients for the types of preventive care studied. PrevQuanti was conducted in 2008–09 among GPs who supervised students training in general practice during an internship at their office. We used email and telephone to recruit GPs working with two medical school departments of general practice in the Paris metropolitan area (who were paid 300 € for their time). For each participating GP, a random sample of 35 men and 35 women was drawn from their patient list (patients who had reported them to be their regular GP), furnished by the national health insurance fund. In practice, we used the "random" function of Excel for the random drawing of patients from the list for each GP. There were no exclusion criteria.
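The per-GP patient draw described above (35 men and 35 women per list, done in practice with Excel's random function) amounts to stratified sampling without replacement; a minimal sketch in Python, assuming each list is available as (patient_id, sex) pairs:

```python
import random

def draw_patient_sample(patient_list, n_per_sex=35, seed=None):
    """Stratified random draw without replacement: n_per_sex men and
    n_per_sex women from one GP's patient list of (patient_id, sex) pairs."""
    rng = random.Random(seed)
    men = [p for p in patient_list if p[1] == "M"]
    women = [p for p in patient_list if p[1] == "F"]
    return rng.sample(men, n_per_sex) + rng.sample(women, n_per_sex)

# Hypothetical roster of 200 patients, 100 of each sex
roster = [(f"pt{i:03d}", "M" if i < 100 else "F") for i in range(200)]
sample = draw_patient_sample(roster, seed=2008)
```

Sampling each sex stratum separately guarantees the 35/35 split regardless of the sex ratio on a given GP's list, which a simple draw of 70 patients would not.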
GPs’ characteristics We collected the following GP characteristics, through a self-administered questionnaire: age, sex, mean duration of consultations, mean number of consultations weekly, and office location. Patients’ characteristics A data collection template was used to extract characteristics of these patients’ medical management from their files: number of visits during the past year, length of follow-up (management), and the preventive practices performed. A questionnaire mailed to patients’ homes (for self-administration) collected various social characteristics, such as educational level . Statistical analysis In a preceding article we studied the assessment of cardiovascular risk for primary prevention according to the gender of patients and their GPs . Here we continue our study of preventive care dispensed in general practice from a perspective focused on gender, but extend it on the one hand to other domains of prevention (overweight, alcohol consumption, diet, and gynecologic cancer screening) and on the other hand to the entire set of patients (without excluding those at high cardiovascular risk, as during our previous work). Binary dependent variables (that is, variables to be explained), obtained from the medical files, were used to describe the GPs’ preventive practices. These variables were selected from RAND’s Quality Assessment Tools, a set of quality indicators of preventive and chronic disease care developed and evaluated in the United States . They are measures of process (more than result) and represent concrete activities that clinicians control rather directly . 
Five domains of prevention (with 2 dependent variables per domain) were considered: weight management, with weight and waist circumference measurements (ever documented in the file, regardless of when); substance use, with smoking and alcohol consumption status documented (ever documented in the file; smoking status was considered in our previous article); lifestyle recommendations, with provision of diet and physical activity advice (documented in the file within the past 3 years); cardiovascular risk, with fasting blood glucose and cholesterol measurements (documented in the file within the past 5 years; both variables considered in the previous article); and gynecological cancer screening, with cervical smear and mammography dates documented (ever documented in the file). We also constructed an aggregate preventive score. Calculated at the patient level, it was the percentage of the preventive practices performed among all the dependent variables applicable to both genders. All of the analyses were performed with mixed logistic models with a random intercept, adjusted for characteristics of patients and GPs known to be associated with the dependent variables studied. The patient characteristics used for adjustment were: age (in 5-year groups), body mass index (BMI < 25 kg/m², [25–30], > 30), educational level (did not pass the "bac" school-leaving exam, passed the "bac", university level), the annual number of visits (0, [1, 2], ≥3), and the length of the patient-GP relationship ([0–1[, [1, 2], ≥3 years). The GP characteristics used for adjustment were: age (< 50 years, ]50–60], > 60), mean duration of consultation (≤ 20 min vs > 20), mean number of visits weekly ([50–70], ]70–100], > 100), and location of office (Paris vs suburbs). The absence of strong collinearity between these characteristics was verified by measuring variance inflation factors (maximum: 2.3).
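The aggregate preventive score defined above reduces to a simple per-patient proportion; a minimal sketch, assuming the eight indicators applicable to both genders (cervical smear and mammography are excluded since they apply only to women) and using illustrative field names:

```python
# Gender-neutral indicators (illustrative names), two per prevention domain
COMMON_INDICATORS = [
    "weight", "waist_circumference",            # weight management
    "smoking_status", "alcohol_status",         # substance use
    "diet_advice", "physical_activity_advice",  # lifestyle recommendations
    "fasting_glucose", "cholesterol",           # cardiovascular risk
]

def preventive_score(record):
    """Patient-level aggregate score: percentage of the gender-neutral
    indicators documented in the file (record maps indicator -> True/False)."""
    done = sum(bool(record.get(name, False)) for name in COMMON_INDICATORS)
    return 100.0 * done / len(COMMON_INDICATORS)

patient = {"weight": True, "smoking_status": True,
           "fasting_glucose": True, "cholesterol": True}
score = preventive_score(patient)  # 4 of 8 documented -> 50.0
```

The mixed logistic models themselves (random intercept per GP) were fitted with SAS and are not reproduced here.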
The dependent variables were analyzed first according to the gender of the patient and the GP (with comparison of the inter-GP variances among male and female GPs; our preceding article did not perform this analysis of variance) and then according to the gender of both combined into pairs. The statistical analyses were performed with SAS software, v. 9.4. The advisory committee for information treatment for health research (commission nationale de l'informatique et des libertés) approved the study, and all patients signed informed consents.
Description of GPs

The first 52 GPs who volunteered to participate were included in the study. We review here only the essential aspects of the description of the GPs and their comparison according to gender, which have already been presented. Their mean age was 55 years (standard deviation (SD) = 6), and 63% of them were men. Their mean duration of consultations was 21 min (SD = 4.8), and on average they saw 92 (SD = 23) patients weekly. Moreover, the consultations by women GPs lasted longer than those of their male colleagues (P = 0.02).

Description of patients

For the 3640 randomly selected patients, GPs returned 3600 questionnaires (98.8%), and patients 2605 (71.5%). Finally, data were collected from both the patient and GP for 71.4% (n = 2599) of the patients included. Patients' mean age was 53.9 years (SD = 9.5) (Table ). Men had been seeing their GPs for a significantly shorter period than the women and had a significantly higher body mass index (BMI).

Gender differences

Overall, male patients received preventive care significantly more often than women (Table ). These differences were most marked in the domain of substance use. Moreover, preventive practices were more frequent among women GPs than their male colleagues. These gender differences were nonetheless significant only for smoking status, cardiovascular risk variables, and gynecological cancer screening.
The aggregate preventive score was higher for male patients (odds ratio (OR) 1.60, 95% confidence interval (95% CI) 1.47–1.75, P < 10⁻⁴) and female GPs (OR 1.35, 95% CI 1.05–1.73, P = .02). Except for weight management, the preventive practices explored varied significantly according to the gender composition of the patient-GP pair (Table ). We thus observed a gradient according to the gender composition of these pairs: the woman patient–male GP pairs systematically had the least frequent preventive practices; the next most frequent was the matching female pairs (patient and GP both women), and then the male pairs. Finally, the male patient–female GP pairs had the most frequent preventive practices. Regardless of the preventive practice considered, the amplitude of the differences between the patients of each gender was globally similar for male and female GPs. For the aggregate preventive score, the OR for male compared with female patients of male GPs was 1.42 and of female doctors, 1.57 (= 1.99/1.27). These results were confirmed by non-significant findings for the tests of interaction between the patients' and GPs' genders (results not shown).

Inter-GP variance

In the models adjusted for patient characteristics (i.e., taking into account a possible effect of the composition of the patient panels for these characteristics) and GP characteristics, the variance between male GPs was greater than that between women GPs for 7 of the 10 preventive practices analyzed and significantly greater for 4 of them (Table ). The preventive practices of female GPs were more consistent and homogeneous than those of their male colleagues in terms of lifestyle recommendations and in the cardiovascular risk domain.
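The within-GP-gender patient ORs quoted for the aggregate score follow directly from the pair ORs by division, since all pairs share the same reference (female patient, male GP); a sketch using the values reported in the text:

```python
# Pair ORs for the aggregate preventive score relative to the
# female patient / male GP reference pair (values reported in the text)
pair_or = {
    ("F", "M"): 1.00,  # reference: female patient, male GP
    ("M", "M"): 1.42,
    ("F", "F"): 1.27,
    ("M", "F"): 1.99,
}

def patient_gender_or(gp_sex):
    """OR for male vs. female patients within GPs of the given sex."""
    return pair_or[("M", gp_sex)] / pair_or[("F", gp_sex)]

or_within_male_gps = patient_gender_or("M")    # 1.42 / 1.00 = 1.42
or_within_female_gps = patient_gender_or("F")  # 1.99 / 1.27 ≈ 1.57
```

The similarity of the two ratios (1.42 vs. 1.57) is what the non-significant patient-gender × GP-gender interaction tests formalize.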
Our results show that male patients and female GPs are associated with the most frequent performance of numerous types of preventive care. There was no combined effect of the two protagonists: the effect of the patient’s gender is identical among male and female GPs; similarly, the effect of the GP’s gender is identical among male and female patients. Accordingly, women patients of male GPs appear to be the patient group least frequently receiving the types of preventive care we studied. Furthermore, it does not appear that the concordance of patient and GP gender influences preventive practices. Finally, women GPs seem to be more consistent in their delivery of preventive practices than their male colleagues. The results according to patient gender reinforce the findings of studies of the inequality of preventive care that disadvantages women.
We must nonetheless point out that our result for the aggregate preventive score disagrees with that of a study conducted among a younger sample of Americans (mean age 45 years); this disparity may be related to interaction between patient age and gender for prevention or to differences in the organization of care between the two countries. Two complementary hypotheses may explain our observations favorable to male patients. The first posits that they may receive more preventive care because epidemiologic evidence indicates that they need it most. According to the second hypothesis, GPs target men for prevention more often because they consider them to be less informed, less aware of risks, and less receptive. It is difficult to determine which of these hypotheses is predominant. The two together probably persuade GPs that the stakes of preventive care are higher for men. Although women’s rates of smoking and drinking are tending to catch up with those of men (the consumption of these two substances increased among women and remained stationary among men, especially for smoking), it is in the area of substance use that physicians’ preventive practices differ most between genders. This may correspond to their lack of knowledge about recent modifications in these behaviors or to the application of a double standard, that is, a situation in which identical behaviors are judged differently as a function of gender. Excessive alcohol consumption, for example, is much more stigmatizing, even taboo, among women. More or less consciously, such social representations may dissuade physicians from questioning their women patients about their alcohol consumption. Our results about the GP’s gender are consistent with the literature and underline the positive influence of women physicians in the area of preventive care. While they provide more of this type of care than their male counterparts, only half of these differences are statistically significant.
This lack of significance may be due to the relatively small number of GPs, which limits the power of statistical tests (in hierarchical models with random effects, power is linked above all else to the number of GPs analyzed). For the same reason, the greater homogeneity of preventive practices among women GPs, measured by the variance between them, should perhaps be analyzed in a larger GP sample. Several sociological explanations help to interpret our findings related to GP gender. Educational practices in school promote originality more in boys and obedience in girls and may thus contribute to the propensity of male GPs to distance themselves from guidelines and that of women to comply with them. The traditional social roles of women also tend to construct a relation to time more oriented toward the planning of tasks and marked by a concern about consequences, which tends to be an asset in the area of prevention. Finally, although women today comprise a substantial portion of the medical world, their entry into this professional universe dominated by men is nonetheless recent. In a quest for legitimacy, they may have had to demonstrate their willingness to conform to professional rules and especially guidelines. Next, specifically for gynecological cancer screening, it is possible that the lack of investment by male GPs may be in part due to their female patients’ preference to have these tests performed by women. Finally, another part of these gender-based differences in preventive practices may be due to women’s more thorough recording of their practices. Women’s dispositions related to writing (effective note-taking acquired during their student life, writing lists and managing family organization in the domestic sphere) and differences in their organization of their professional practice (shorter working hours leading to greater file-sharing) may combine to make preventive management by women more traceable.
Our simultaneous analysis of patient and GP gender does not show that their concordance influenced care. Our observations thus do not support the hypothesis that a mirror effect between patient and GP favors quality care. Moreover, although women GPs appear to provide more preventive care, they do so in a manner just as inegalitarian (that is, favoring male patients) as that of their male colleagues. These differences raise issues beyond their ethical aspects, for recent work has demonstrated that some risk factors have a stronger, more negative effect on women than men. This is the case, for example, for diabetes as a cardiovascular risk factor and for smoking and lung cancer. Our study also has some strengths: its setting in general practice, different from the preceding studies, which have taken place essentially in hospitals or with specialists; consideration of the hierarchical structure and the non-independence of patients with mixed models; and the numerous adjustments for patient and GP characteristics known to be associated with prevention, which limit the possibility of residual confounding. Our study also has several limitations. First, our analyses are not adjusted for patients’ individual history of diabetes and cardiovascular disease, although these are factors that underlie physicians’ preventive practices. We conducted sensitivity analyses by adjusting for this history and determined that it did not modify our results. Second, our study did not take into account the health behaviors of the GPs, which are nonetheless likely to modify their preventive practices. Male GPs smoked more than their female colleagues, and the current or previous smoking status of doctors affects their investment in smoking cessation. Accordingly, not considering this type of characteristic is likely to lead to overestimating the differences between male and female GPs. In France, prevention is one of the missions explicitly assigned to GPs.
There is no type of facility or practice specifically devoted to prevention, and the use of practice nurses in GPs’ offices is just barely beginning in this country. Although access to specialists is relatively easy, only gynecologists also contribute to the gynecological cancer screening of a significant proportion of the population. Although these specificities of health care in France may raise questions about the generalization of our results to other countries, the fact that French GPs are relatively alone in ensuring preventive care enables the study of the influence of their gender on the preventive care dispensed. Moreover, the epidemiologic data about healthy behaviors globally less adopted by men and the male domination of the medical world, that is, the low concentration of women physicians within some specializations and their limited access to leadership positions in academic medicine, are widely shared in much of the West. Consequently, although the male-female gaps observed may vary from one country to another, our results are probably at least qualitatively generalizable. In this work, we adopted an analysis founded on equality (each managed the same) because the guidelines do not indicate any reason for different management according to the patients’ gender. Equity (to each according to his/her need) might also be an interesting point of view for this analysis, insofar as the needs for prevention are not necessarily the same for men and women. This may lead us to subsequent work on this topic. In conclusion, physicians, especially those providing primary care, must be made aware of these differences in their preventive practices, in relation to both their patients’ gender and their own. They must realize that the way they approach prevention, health behaviors, and patients’ risky practices is rarely neutral in relation to gender and that it directly affects their medical activities.
Although the increasing feminization of medicine should result in the increased use of preventive care, the differences in practices prejudicial to women may well continue, independently of demographic changes. To improve this situation, the influence of sex and gender (in their biological and social dimensions) on health and illness and its implications for practice should be covered more thoroughly in the medical school curriculum. |
A training programme facilitating guideline use of occupational health professionals: a feasibility study | 0b2a9ca8-b6bc-47a0-9003-cc75925f7c8e | 6169000 | Preventive Medicine[mh] | Previous research has shown that having a chronic disease negatively affects work participation, as people with a chronic disease are less often employed and, when they are employed, experience difficulties in meeting physical or psychosocial work demands . Occupational health professionals (OHPs) may support such people to improve their work participation. In the Netherlands, there are two types of OHPs involved: occupational physicians (OPs), who provide guidance to individuals to support work retention or return to work, and insurance physicians (IPs), who conduct a work ability assessment of individuals with a chronic disease. The provision of recent and relevant evidence can support OHPs in their guidance or assessment tasks. Several guidelines have been developed, incorporating recent evidence, with the aim of improving the quality of guidance or assessment given by OHPs . One of these guidelines is the ‘Work participation of people with a chronic disease’ guideline , which aims to support the work participation of people with a chronic disease. The guideline includes an overview of factors, interventions and input on collaboration among professionals to promote the work participation of individuals with a chronic disease, irrespective of their specific diagnosis. Although the use of knowledge and skills provided by a guideline can lead to a higher quality of occupational care , guideline adherence by OHPs is generally low . Previous studies have shown that guideline use is influenced by various factors that may act as barriers, which are related to the professional, the individual with a chronic disease, or to the knowledge included in the guideline . 
One of these barriers is a lack of knowledge or skills of OHPs, which influences their capability, motivation and opportunity to use the evidence from the guideline in practice. The knowledge and skills provided by a guideline might thus act to enhance practice, but studies recognize that active strategies are needed to increase their uptake and use. In this respect, multiple educational methods have been found to be effective in facilitating learning. On this basis, we developed a training programme to facilitate OHPs’ capability, to increase use of the guideline mentioned above and the knowledge and skills it provided. Before focusing on implementation on a large scale, Grol and Wensing recommend first testing and running such a training programme with a smaller sample to evaluate whether the programme is a feasible approach to facilitate OHPs’ knowledge and skills. In addition, performing a feasibility study provides valuable information on how the trainees perceive the programme, and whether they consider it to have contributed to their knowledge and daily practice. Bowen states that there are eight aspects which can be addressed in a feasibility study, namely: acceptability, demand, implementation, practicality, adaptation, integration, expansion and limited-efficacy testing. These aspects measure how a training programme is perceived by the trainees, whether the training programme can be carried out as intended, whether it fits with the current system, whether it can be adapted for another target group, and whether it shows promise of being successful. As our aim was to study whether the training programme is feasible in facilitating OHPs’ use of the knowledge and skills provided by the guideline, we focused on the aspects of ‘acceptability’, ‘implementation’ and ‘limited efficacy’.
Acceptability is a common area of interest in feasibility studies; it focuses on whether trainees – in our case OHPs – perceive the training programme as helpful and as valuable to their daily practice. We also evaluated the aspect of ‘implementation’ to explore whether trainees perceive that the training programme could be implemented on a larger scale. Finally, we studied limited efficacy to evaluate whether, in a smaller sample of the intended population (i.e. OHPs), the training programme shows effectiveness in terms of an improvement in the participants’ knowledge and skills. The study aims to answer the research question: What levels of perceived acceptability, implementation potential and limited efficacy does our training programme for OHPs have, with respect to its aim of facilitating the use of knowledge and skills provided by a guideline?

The feasibility of the training programme was evaluated using an observational design. Acceptability and implementation of the training programme were explored after the training programme, as the trainees’ perception of the training could only be reported after experiencing the programme. Limited efficacy was measured using a one-group pre-post design by assessing the level of knowledge and skills of trainees at baseline (T0), after reading the guideline (T1) and directly after completing the training activities (T2). The Medical Ethics Committee of the Academic Medical Center determined through a written statement that no ethical approval was required for this study (trial number: W17_081#17.100).

Participants

Based on Bowen et al. and Ruitenburg et al. we aimed to recruit a total of 20–40 participants, to be divided into two training groups at different training locations. As we aimed to include an equal number of OPs and IPs for each training programme, we used stratified sampling.
OPs and IPs were recruited by contacting several professionals in the field, including a staff member from the professional association of OPs, a staff member of the national training institute for OHPs, and two staff IPs working in the regions in which the training programme was held. These people then invited OHPs from their network to join the study by sending them an email with a standardized information letter, which contained all the relevant information about the study, its content and the nature of the training programme. In addition, it stated that participation in the study was voluntary. The OHPs who were interested in participating could register by sending an email to the first researcher (MV). OPs and IPs were included if they had experience in the guidance or assessment of people with a chronic disease. Written informed consent was obtained from all participants included in this study.

Training programme

The training programme was developed in collaboration with OPs, IPs and experts in the field of education of professionals. The process of the development of the training programme has been reported in another article. In brief, as a first step, OP and IP training needs were explored by asking the OHPs what they would need to use the knowledge and skills provided by the guideline in practice. Based on the OHPs’ reported training needs, researchers formulated learning objectives as a second step (see Table ). Subsequently, experts in the field of education were interviewed to determine which training activities could be employed to best impart the knowledge and skills to OHPs. Finally, based on the input of both the OHPs and the experts, the learning objectives and teaching methods were integrated into a one-day training programme by the researchers. The training programme was provided by two trainers, an OP and an IP.
The first researcher (MV) was present during both training programmes, provided an explanation of the content of the guideline, and assisted the trainers when needed. The second researcher (DB) was present at one training location and assisted the trainers when needed. The protocol of the training programme is presented in Table .

Feasibility

To evaluate feasibility we researched ‘acceptability’, ‘implementation’ and ‘limited efficacy’, as outlined below.

Acceptability

To evaluate the trainees’ perspective on the acceptability of the training programme, the OHPs were asked to indicate after the training (T2) to what extent they agreed with four statements on a 10-point visual analogue scale (VAS), with 1 indicating ‘I completely disagree’ and 10 indicating ‘I completely agree’. The statements were: a) ‘Because of the training programme, I am able to use the knowledge and skills provided by the guideline in my own guidance or assessment of people with a chronic disease’; b) ‘The training programme adheres to the daily practice of OHPs in their guidance and assessment of people with a chronic disease’; c) ‘The training programme is relevant to and useful in the guidance and assessment of people with a chronic disease’; d) ‘The training programme contributes to my knowledge and skills concerning the guidance and assessment of people with a chronic disease’. Mean scores and standard deviations were analysed using descriptive statistics (SPSS Statistics 24.0).

Implementation

To evaluate the trainees’ perspective on whether the training programme could be implemented on a larger scale, the OHPs were asked to indicate after the training (T2) on a 10-point VAS to what extent the training programme could be implemented in practice, with 1 indicating ‘I completely disagree’ and 10 indicating ‘I completely agree’.
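The descriptive summary applied to these VAS ratings amounts to a mean and standard deviation per statement. A minimal sketch with hypothetical ratings (not study data; the study itself used SPSS), using only the Python standard library:

```python
import statistics

# Hypothetical 10-point VAS ratings from eight trainees for one statement
ratings = [7, 8, 6, 9, 7, 8, 7, 9]

mean = statistics.mean(ratings)   # arithmetic mean of the ratings
sd = statistics.stdev(ratings)    # sample standard deviation

print(f"mean = {mean:.1f}, SD = {sd:.1f}")
```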
In addition, the OHPs were asked to report, through open-ended questions, which barriers to and facilitators of implementation of the training programme on a larger scale they foresaw. Mean scores and standard deviations on perceived implementation were analysed using descriptive statistics (SPSS Statistics 24.0). Answers to the open-ended questions regarding barriers to and facilitators of implementation were summarized, and similar concepts were grouped together manually by the first researcher (MV). This categorization of similar concepts was checked by the research team (DB, JH, HW, MF).

Limited efficacy

To evaluate whether the training programme had an effect, the knowledge and skills of OHPs were measured at baseline (T0), after reading the guideline (T1) and directly after completion of the training activities (T2) using knowledge and skills tests. Each test included eight questions, five addressing knowledge and three addressing skills. The latter were addressed by asking the OHPs to apply their knowledge to a case study. Participants had to give short open-ended answers, which were scored between 0 and 2 points per question. Their performance was evaluated on the basis of the sum of all answers, resulting in a minimum total score of 0 and a maximum total score of 16 points. To achieve consistency and consensus between the researchers, a scoring rubric, containing all of the correct answers to the questions based on the guideline, was used to assess the performance of the participants. Both the tests and the rubric were developed by the first and second researchers (MV and DB). Questions and answers included in the tests and rubrics were directly derived from the guideline “Work participation of people with a chronic disease”, to prevent influence of the researchers. The formulated tests and rubrics were checked by the research team (JH, HW, MF).
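The scoring scheme described above can be sketched in a few lines. Only the structure comes from the study (eight questions, 0-2 points each, totals 0-16); the example point values are hypothetical:

```python
# Each test has 8 open-ended questions (5 knowledge, 3 skills),
# each scored 0-2 points against the rubric, so totals range 0-16.
def total_score(points_per_question):
    assert len(points_per_question) == 8, "a test has eight questions"
    assert all(0 <= p <= 2 for p in points_per_question), "each question scores 0-2"
    return sum(points_per_question)

# Hypothetical rubric scores for one OHP at one measurement point
example = [2, 1, 2, 0, 1, 2, 2, 1]
print(total_score(example))  # → 11
```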
After the training programme, answers on the questions given by OHPs were scored for correctness by the second researcher (DB) and checked by the first researcher (MV). The total scores per measurement for the entire sample were compared between T0 and T1, and T1 and T2. Since the data were found to have a non-normal distribution, scores were analysed using a non-parametric Friedman test. Post-hoc tests were conducted using Wilcoxon signed rank tests to measure differences between T0 and T1, and T1 and T2 (two-tailed).
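This repeated-measures analysis can be reproduced outside SPSS. A minimal Python sketch with hypothetical total scores (the study data are not reproduced here), using SciPy's `friedmanchisquare` and `wilcoxon`:

```python
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical total scores (0-16) for six trainees at the three
# measurement points T0, T1 and T2.
t0 = [5, 6, 4, 7, 5, 8]
t1 = [8, 9, 6, 10, 8, 9]
t2 = [11, 12, 9, 13, 10, 12]

# Omnibus test across the three repeated measurements (non-normal data)
stat, p = friedmanchisquare(t0, t1, t2)

# Post-hoc pairwise comparisons (two-tailed): T0 vs T1 and T1 vs T2
stat01, p01 = wilcoxon(t0, t1)
stat12, p12 = wilcoxon(t1, t2)

print(f"Friedman p = {p:.4f}, T0-T1 p = {p01:.4f}, T1-T2 p = {p12:.4f}")
```

The Friedman test serves as the omnibus check before the pairwise Wilcoxon comparisons, mirroring the order of analysis in the study.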
Participants A total of 38 participants joined the study, of which 20 worked as OPs, 16 worked as IPs and two worked as both an OP and an IP. An equal number of men (19) and women (19) participated in the study. The average age of the participants was 53 years old (SD: 10), with a range of 26 to 63 years. The OHPs had on average 21 years (SD: 9) work experience, with a range of 0.5 years to 35 years. Feasibility All participants completed the baseline questionnaire (T0) in May 2017.
The T1 and T2 questionnaires were deployed on the day of the training programme, before the start of the programme (T1) and directly after completion of the training activities (T2), and were also completed by all participants. Both training programmes were held in June 2017. Acceptability Participants reported that the training programme increased their capability to use the guideline (mean: 7, SD: 1). The participants generally found that the training programme adhered to their daily practice (mean: 8, SD: 1) and was relevant to and useful in their guidance and assessment of people with a chronic disease (mean: 8, SD: 1). Finally, the OHPs indicated that the programme contributed to their knowledge and skills related to the guidance and assessment of people with a chronic disease (mean: 8, SD: 1). Implementation The OHPs indicated that the training programme could be implemented on a larger scale (mean: 7, SD: 1). However, various barriers to and facilitators of implementation on a large scale were reported. The barriers ‘time’ and ‘money’ were reported to hinder implementation. OHPs also reported that not all managers would give approval for them to undertake the training programme because of organizational constraints. Participant: “Managers won’t give permission for employees [occupational physicians or insurance physicians] to take a day off for this [the training programme]”. Some OHPs foresaw barriers in relation to the composition of the training programme group. They reported that the size of the group would hinder uptake, or foresaw difficulties with the inclusion of an equal number of OPs and IPs in each training programme group. They also reported that the training programme required active commitment, and that not all OHPs would be motivated to actively participate in the training programme.
Barriers with respect to the content of the guideline were also reported, with some OHPs finding it difficult to read the guideline, or finding the evidence not applicable to every situation. It was also stated that in order for OHPs to use the evidence in practice, more familiarity with it is needed than is provided in a one-day training programme. Finally, several OHPs reported that they foresaw no barriers to the implementation of the training programme on a larger scale. Participant: “I don’t see any objections. This [the training programme] is essential for providing a rationale for the recommendations that are given”. A frequently reported facilitator was that OHPs were taught the relevance and value of the evidence included in the guideline, as some OHPs had trouble applying the theoretical evidence to their practice. The OHPs also reported that the evidence and training programme provided them with knowledge about and insight into factors and interventions applicable to a broad population. In addition, they reported that a training programme would improve and standardize the guidance and assessment of people with a chronic disease, and that it facilitated the use of knowledge and skills provided in the guideline. Participant: “It [the training programme] provides an extra opportunity to gain experience with the guideline. The more often you pick it up and read it, the easier it is to get to grips with.” Several OHPs reported that one facilitator of implementation would be the inclusion of both OPs and IPs, as this stimulates trainees to collaborate and learn to work towards one goal, which is optimizing the guidance and assessment of people with a chronic disease. Finally, one OHP suggested that receiving accreditation points would also be a facilitator. 
Participant: “It [the training programme] helps insurance physicians and occupational physicians to speak the same language, which helps improve the collaboration in occupational healthcare and reintegration.” Limited efficacy Test scores on the knowledge and skills tests of the individual participants are displayed in Additional file . The non-parametric Friedman test showed a significant improvement in knowledge and skills over time (χ²(2) = 53.656, p < 0.001), with the median score improving from 6.3 (T0, range: 2–11), to 8.3 (T1, range: 3–13.5), and 12.3 (T2, range: 6–15.5). Post-hoc analysis using the Wilcoxon signed rank test showed a significant improvement between T0 and T1 (p < 0.001), and between T1 and T2 (p < 0.001).
This study examined whether a training programme is a feasible approach to facilitate OHPs’ use of knowledge and skills provided by a guideline. Regarding acceptability, OHPs found that the training programme increased their ability to use the knowledge and skills in daily practice, and they experienced the training programme as useful, relevant and as contributing to their work. The OHPs also indicated that the programme could be implemented on a larger scale, although they foresaw both barriers to and facilitators of implementation on a larger scale. The barriers were mainly related to restrictions regarding ‘time’, ‘money’ and the OHPs’ organizational constraints, while the facilitators were related to the added value of the knowledge and skills regarding the guidance and assessment of people with a chronic disease. Learning to apply the evidence in practice was also mentioned as a facilitator. Finally, with regard to limited efficacy, the results showed that the OHPs’ knowledge and skills improved after completing the training programme. The opinions of the OHPs and their improvement in knowledge and skills highlight the need for a training programme to facilitate the use of knowledge and skills provided by the guideline. These results are congruent with those of other training programmes facilitating OHPs’ use of knowledge and skills provided by guidelines, including a training programme for IPs and a training programme for OPs. Both programmes have been found to contribute to OHPs’ abilities, with Zwerver et al.
reporting improvements in IPs’ attitudes, self-efficacy and intention to apply the knowledge and skills provided by the guideline, while Joosen et al. reported significant improvements in knowledge, self-efficacy and motivation to use the knowledge and skills provided by the guideline. That the provision of a training programme can be an effective way of facilitating the use of knowledge and skills provided by a guideline has also been confirmed by Michie et al., who indicated that increasing knowledge and skills can also increase capability (‘do OHPs know how to use the knowledge and skills?’) and thereby uptake by OHPs. To increase OHPs’ capability, we primarily included training activities (e.g. role play, a case study or discussion of best practices) which reflected daily practice, focusing on learning through personal experience and the ability to discuss issues with peers. Research shows that this approach facilitates the integration of new knowledge and skills with OHPs’ current knowledge base, enhancing the OHPs’ application of knowledge and skills. Although the training programme primarily focused on increasing capability, our results showed that OHPs also found the training programme acceptable, relevant and of value to their work. This may indicate that ‘motivation’ (‘do OHPs believe the knowledge and skills benefit them in their guidance and do they want and plan to use the knowledge and skills?’) is also positively influenced by the programme. As the programme was developed in collaboration with OHPs to ensure that it matched their needs and preferences, this may have positively influenced OHPs’ motivation. With respect to implementation of the training programme on a larger scale, OHPs also reported various barriers and facilitators. These were in line with the findings of previous studies, which showed that OHPs primarily reported barriers related to time, money and collaboration with others. Michie et al.
includes barriers and facilitators under ‘opportunity’ (‘do OHPs have access to the knowledge and skills and are they supported to use them?’), one of the three conditions that are considered to facilitate uptake. Further implementation should therefore address the barriers and facilitators, as they can largely influence the uptake of the knowledge and skills provided by the guideline on a large scale. A strength of this study is that the training programme included both OPs and IPs. This was done because one of the learning objectives focused on improvement of collaboration between OPs and IPs in their support of people with a chronic disease participating in work. The inclusion of both professions in a training programme had not previously been done, but was perceived as highly beneficial according to our trainees. The OHPs reported this to be a facilitator of the implementation of the training programme, because it supported collaboration and provided the OHPs with the opportunity to learn from each other’s perspectives. Another strength is that we developed the training programme in collaboration with OHPs, in which we attempted to follow the principles of constructive alignment. By including OHPs in the development of the programme, we aimed to best match the training content and method to the needs of the OHPs, which has been shown to positively influence adherence. Previous studies have reported that following the principles of constructive alignment facilitates the integration of knowledge and skills. By doing so, we endeavoured to develop a constructive programme facilitating the use of knowledge and skills by OHPs in daily practice. A limitation of this study is that we used a one-group pre-post design to measure the increase in knowledge and skills. We decided not to include a control group, as an important learning objective of this training was to stimulate collaboration between OPs and IPs.
As OHPs experience a high workload and work in different settings, we decided that a pre-post design would both serve the participants and provide us with an answer as to whether there was an increase in knowledge and skills. Although we cannot strictly rule out the influence of extraneous variables, we achieved our aim, which was to yield trends in the predicted direction, as per Bowen; this is additionally supported by the OHPs’ perspective on the feasibility of the training programme. Future research, however, should include a control group to exclude the influence of other variables when measuring the increase in knowledge and skills. In addition, the method used to measure knowledge and skills has its limitations, as the training programme and questions were not fully congruent with each other. The taxonomy developed by Bloom et al. classifies different levels of learning, ranging from ‘remembering information’ to the highest level of ‘creating new information’, with the individual being able to produce new information. The training programme primarily focused on applying knowledge, one of the higher levels of learning, whereas the questions used to measure knowledge mainly focused on remembering, the lowest level of learning. Although we attempted to include questions focusing on a higher level of learning by including skills questions based on a case study, we were not able to fully match the questions with the programme. We chose this method, as other approaches were not feasible in the chosen setting and time frame of the training programme. With regard to generalisability, there is a chance that the sample included more intrinsically motivated OHPs, since they participated voluntarily. However, as trainees received accreditation points (i.e.
physicians need to acquire a certain number of accreditation points per year to retain their registration as a physician) for participation, it is highly likely that many participants joined the training to acquire accreditation points. This means that the sample is likely to be a reflection of the entire population, including both physicians who are intrinsically motivated and physicians who are primarily motivated by receiving accreditation points. Future research on implementation and evaluation of the training can expand insight by using a control group or by additional observation of OHPs, allowing exploration of the level of application and integration of knowledge and skills by OHPs in daily practice. In addition, the training programme was developed as a one-day programme to make it more feasible for OHPs to attend and to fit with their daily practice. As research shows that recall and use of knowledge and skills can diminish over time, it might be worth considering the addition of follow-up meetings aimed at increasing OHPs’ recall. Further research might therefore also explore whether a training programme containing multiple sessions or including follow-up meetings is more effective while remaining a feasible approach for OHPs. This study evaluated the feasibility of a training programme to facilitate OHPs’ use of knowledge and skills provided by a guideline. The results of the study showed that OHPs considered the training programme to be feasible, and that the OHPs’ knowledge and skills increased after completing the training programme. Thus, the programme can serve as an approach to facilitate OHPs’ use of knowledge and skills provided by a guideline. Additional file 1: Includes data titled ‘individual scores of participants on T0, T1 and T2 (limited efficacy)’.
To assess whether the training showed limited efficacy, we measured the increase in the participants’ knowledge and skills by administering knowledge and skills tests at baseline (T0), before the training (T1) and after the training (T2). The supplementary file shows the scores of all participants on each test (T0, T1 and T2). (DOCX 14 kb)
Simple bone cyst with severe root resorption: a case report (PMC 11866581) According to the World Health Organization classification, a simple bone cyst (SBC) of the jaws is categorized as an intraosseous pseudocyst owing to the lack of an epithelial lining. It is typically filled with serous or sanguinous fluid, or may be empty. SBC of the jaws was first described in 1929. The diagnostic criteria, which were established in 1946, include a single lesion devoid of epithelial lining, absence of infection, and a cystic cavity that is either empty or filled with only fluid or connective tissue. It has a predilection for the mandibular body and mandibular angle during the second and third decades of life, accounting for approximately 1% of cysts in the jaws. Various terms, such as traumatic bone cyst, extravasation cyst, or hemorrhagic bone cyst, have been used to describe SBC. This diversity in nomenclature is attributed to its uncertain pathogenesis. Although the exact etiology remains unclear, SBCs are thought to be localized abnormalities that occur during normal bone remodeling or metabolic processes. SBC presents as a slow-growing, non-expansile lesion that is usually asymptomatic and typically identified during routine radiographic examinations. On radiography, SBC appears as a well-defined radiolucent lesion, with or without a sclerotic rim, and may have scalloped borders when extending between tooth roots. While these lesions typically do not affect the surrounding teeth, there have been rare reports of tooth displacement, resorption, and loss of the lamina dura. Herein, we report an unusual case of an SBC associated with severe root resorption of the involved tooth in the left mandible. Additionally, we discuss potential explanations for this uncommon presentation, in conjunction with a review of the relevant literature.
A 63-year-old woman was referred to our hospital with a cystic lesion in the left posterior mandible. The patient had been experiencing pain on chewing in the area around the lesion for the past 2 months. She had a medical history of hypertension but was otherwise healthy. Panoramic and periapical radiography showed a well-demarcated and unilocular radiolucent lesion in the left mandible with severe root resorption of the left mandibular second molar (Fig. -a, b). Cone-beam computed tomography revealed a well-defined, low-attenuation lesion without bony expansion (Fig. -c, d). The lesion was separate from the inferior alveolar nerve, and displacement of the nerve canal was not observed. Intraoral examination of the left mandibular second molar revealed no evidence of dental caries or other abnormalities. The tooth responded positively to electric pulp testing, with no signs of mobility or sensitivity to percussion. These findings indicated that the lesion was not associated with periapical infection. Based on the clinical and radiographic findings, severe idiopathic root resorption with cystic degeneration was suspected. A benign odontogenic tumor was also suspected, but the radiologic features were unusual. Surgical exploration of the cystic lesion, extraction of the second molar, and histopathologic examination were performed. During surgical exploration, a cavity containing only a small amount of fibrous tissue was discovered. The second molar was extracted, the space was curetted, and bone graft material was subsequently placed within the defect. Histopathological examination of the extracted tooth revealed marked external root resorption with the formation of osteodentin and the presence of osteoclasts on the resorbed dentin surface (Fig. -a, b). The pulpal tissue showed signs of mild inflammation; reparative dentin deposition was evident. The tissue specimen obtained from the cavity revealed a band of fibro-collagenous tissue without an epithelial lining (Fig.
-c). Additionally, amorphous eosinophilic calcified material and reactive new bone formation were observed (Fig. -d). The lesion healed without complications after surgery. Six months post-operatively, the missing teeth, including the left maxillary second molar, which had been extracted due to periodontitis, were restored with implants. A panoramic radiograph acquired 1 year after surgery showed normal bone regeneration within the lesion, without evidence of recurrence (Fig. ). To the best of our knowledge, only seven cases of SBC of the jaw associated with root resorption have been reported . One case involved a 52-year-old woman with an SBC associated with an impacted third molar in the right mandible, which showed significant loss of tooth structure . This case was similar to ours in that it presented with severe root resorption in a tooth associated with an SBC in a middle-aged woman. However, it was unclear whether the root resorption resulted from internal or external resorption. In contrast, our case demonstrated external root resorption confirmed by histological findings, suggesting an association with the cystic lesion beneath the tooth. Suei et al. reported root resorption in 5 of 31 SBC cases. Among these five cases with root resorption, four recurred after treatment. However, the extent of root resorption and detailed radiographic findings were not mentioned in that study. The clinical manifestation of SBC in older individuals often differs from that in younger individuals . Older patients tend to present with atypical features, such as the loss of lamina dura of the affected tooth or more frequent multiple cysts . These atypical SBCs are often associated with a higher recurrence rate . While the bony defect in this case healed uneventfully, atypical SBCs may require longer follow-up. Studies on root resorption in jaw lesions are limited. 
Among the odontogenic cysts or tumors, ameloblastoma is the lesion most often associated with root resorption, occurring in approximately 81% of cases. Root resorption is reported in 55% of dentigerous cysts, 36% of nasopalatine duct cysts, and 18% of radicular cysts. Despite their aggressive nature, the incidence of root resorption in odontogenic keratocysts is low. The mechanism of root resorption in jaw lesions is not fully understood and does not appear to be solely related to the lesion’s aggressiveness. One theory is that lesions originating from the dental follicle can resorb dental hard tissues, while another suggests that intra-cystic pressure-induced ischemia causes root resorption. However, SBC does not originate from the dental follicle, nor does it have significant intra-cystic pressure. Root resorption in SBC is likely to arise from a different mechanism. We also considered the possibility that idiopathic tooth resorption may progress aggressively and form a cystic space mimicking SBC. SBCs in middle-aged patients often occur in conjunction with benign fibro-osseous lesions such as cemento-osseous dysplasia (COD). SBCs may arise from the progression of COD or from large cysts formed by coalescing microcysts in fibrous dysplasia. These findings indicate that fibro-osseous lesions might contribute to the occurrence of SBC. The amorphous eosinophilic calcified material in this case may represent an early sign of COD. However, typical histological features of COD were not observed in our case. Additionally, hypercementosis is more commonly seen than root resorption in COD, which is inconsistent with this case’s findings. This study presents a rare case of SBC associated with severe root resorption. The atypical presentation of SBC in older women, potential for recurrence, and need for longer follow-up periods highlight the importance of proper diagnosis and management of this condition.
Although the mechanism by which root resorption occurs in SBC remains unclear, this case underscores the importance of considering SBC in the differential diagnosis of jaw lesions associated with root resorption.
Women’s response regarding timing of genital surgery in congenital adrenal hyperplasia

Congenital adrenal hyperplasia (CAH) is a group of autosomal recessive disorders affecting steroid synthesis. By far the most common cause of CAH is 21-hydroxylase deficiency (21OHD). In 21OHD, cortisol and aldosterone concentrations are often low. In contrast, androgen and steroid precursor concentrations are high, resulting in 46,XX neonates being born with variable degrees of virilized genitals. In the classic phenotypes, i.e., salt-wasting (SW) and simple virilizing (SV) CAH, almost all 46,XX neonates have atypical genitals with variations. However, in the mild non-classic (NC) CAH, most 46,XX neonates have normal genitals or just a mild clitoral enlargement. Children with 46,XX karyotype CAH and atypical genitals have been surgically adapted in the female direction. Depending on the degree of virilization, the surgical procedures have differed and included clitoral reduction surgery (previously also clitoris amputation) and vaginal surgery. Vaginal surgery often includes opening and mobilization of the common urethral and vaginal duct (i.e., the virilized urethra with insertion of the vagina on different levels), in addition to vulvoplasty with reconstruction of the introitus and labia. The aims are to create normal-looking feminine external genitals and possibly stimulate adequate bladder emptying with no urinary incontinence or infections. Moreover, the aims are also to permit vaginal penetration in adulthood and a normal reproductive life. Traditionally, many surgeons have preferred, after parental informed consent, the genital reconstructive surgery to be performed early (2nd to 6th month or at least before 2 years of life). This is due to good tissue elasticity, to prevent potential hydrometrocolpos and to reduce parents’ and doctors’ distress.
However, the timing of the genital surgery is controversial due to reported disappointing results and potential complications of surgery. Moreover, patient advocacy groups have argued that cosmetic surgery should only be performed with the individual’s own informed consent. Major institutions, such as the United Nations special rapporteur on torture in 2013 and the European Council and Parliament in resolution 2191 (2017), advocate from a human rights perspective for postponing genital surgeries, if not vital for health, until the individual can give their own informed consent. A moratorium on early genital surgery in individuals with intersex conditions has been implemented in some countries and health care institutions. This is a sensitive topic, difficult to assess in an informative way, since patients tend to be satisfied with the treatment they have received due to positive coping strategies. A reanalysis and investigation of previously collected data, with the timing of surgery as an overarching question, were performed. The aim was to assess the women’s response to the question on age for surgery in relation to their own treatment and surgical outcome regarding the clitoris. All participating women with CAH were born 1940–1984 and examined during the years 2002–2005 as part of a larger follow-up study including endocrine as well as surgical aspects. Details concerning these can be found elsewhere. Among other things, the women filled out a semi-structured questionnaire including questions on whether they had had genital surgery, the type of surgery, their own thoughts about the timing of genital surgery and their experience of the information about surgery. In English translation, these questions were: “How do you think the healthcare system should deal with children and women with the same illness as you have, concerning the timing of surgery? What is your experience of the information about surgery?
Do you have any additional comments that you would like to pass on?” The medical and surgical files were reviewed. All had been diagnosed with 21OHD clinically, biochemically and genetically. All were on glucocorticoid medication and the majority also on fludrocortisone (n = 51, 82%). The cosmetic results, including clitoris size at assessment, were investigated. Early surgery was defined as genital surgery performed at 4 years of age or earlier, and late surgery as surgery at 10 years or later; these cut-offs captured the majority of women with CAH who had had surgery in the current study, and 10 years is approximately the median age for the start of puberty in girls. The possible responses to the question on thoughts about age at surgery were divided into early, during or after puberty, and no opinion. For example, a response that the surgery should be done as early as possible was grouped into the early group, and a response of during the teenage years/puberty into the late group; those recommending specific ages, e.g., 12–13 years, were grouped accordingly (in this case, the late group). Those who responded “I don’t know” were placed in the no-opinion group. The responses regarding experience of the information about surgery were divided into mainly positive, mainly negative or no opinion. Statistical analysis Non-parametric statistical analysis was used. Continuous data were analyzed using the Kruskal-Wallis test, while categorical data were analyzed using the Chi2 test. A p-value < 0.05 was considered statistically significant. In total, 62 women with CAH with a mean age of 28 years (18–63) agreed to participate. The SW and SV phenotypes were equally common (45% each) and only 10% had the NC form (Table ). The most common genotype groups were null (23%), I2G (24%) and I172N (40%).
The mean age at first genital surgery was 3 years (0–28 years) in the 52 patients (84%) who had had genital surgery, with 60% having had early surgery and 29% late. Before 1 year of age, 4 (8%) had had genital surgery, and 15 (29%) at 2 years or younger. The decade of surgery ranged from the 1950s to the 1990s, and equal numbers had had clitoral and vaginal surgery (Table ). Almost half reported a positive experience of the information about surgery, while a third had no opinion and a fifth had a negative experience. In the whole cohort, 42% had no opinion regarding the timing of genital surgery; early surgery was preferred by 39%, while 19% had the opinion that it should be done during or after puberty. Of those in favor of early surgery, 70% had had early surgery themselves, while among those in favor of late surgery, 42% had had late surgery themselves. Almost all with genital surgery had an opinion about timing, while among those who had not had genital surgery only 65% had an opinion (Table ). Those with an opinion favoring early surgery had had surgery earlier than the other two groups. Vaginal surgery was less common among those favoring early surgery. Age, phenotype, genotype, decade of surgery and experience of the information about surgery were similar between the three groups. The size of the clitoris, and the women’s opinion about its size, did not differ significantly between the groups. Of note, there were 8 patients who had had a clitorectomy.
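As a hypothetical illustration of the analysis pipeline, the early/late age cut-offs and a Chi2-style comparison across the three opinion groups can be sketched in plain Python. The function names and the contingency-table counts below are illustrative assumptions, not values taken from the study; the closed-form p-value applies because a 2×3 table has (2−1)×(3−1)=2 degrees of freedom:

```python
import math

def classify_surgery_age(age_years):
    """Study definition: early = first surgery at 4 years or earlier,
    late = 10 years or later (roughly the median age of pubertal onset)."""
    if age_years <= 4:
        return "early"
    if age_years >= 10:
        return "late"
    return "intermediate"

def chi2_test_2x3(table):
    """Pearson chi-square test of independence for a 2x3 contingency table
    (e.g. a yes/no characteristic across the three opinion groups).
    With df = 2 the chi-square survival function is exactly exp(-chi2 / 2)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, math.exp(-chi2 / 2)

# Hypothetical counts: rows = characteristic present/absent,
# columns = early / late / no-opinion groups.
chi2, p = chi2_test_2x3([[20, 10, 10], [10, 10, 20]])
```

In practice one would use a general routine such as scipy.stats.chi2_contingency (or switch to Fisher's exact test when expected cell counts are small), which handles arbitrary table shapes and degrees of freedom.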
It should be noted that the data was collected 20 years ago, before the present discussion and questioning of early genital surgery. Hence, the women in this study had not been exposed to the discussion from a human rights perspective or to the thoughts about individual self-determination. Genital reconstructive surgery in 46,XX children with CAH and atypical genitals is complex and has become controversial. There is, however, broad consensus that the decision regarding genital surgery and its timing should be made together with the parents and, when possible, the patient. Moreover, genital surgery should be performed after discussions in a multidisciplinary team. Genital surgery should further only be performed in centers with experienced pediatric surgeons/urologists, pediatric endocrinologists, pediatric anesthesiologists, behavioral/mental health professionals, and social work services. There is also, at least to some extent, consensus that surgery should be performed either early (the first year or years of life) or at puberty, and that practice should be more restrictive in cases of mild virilization. It is today more common to wait for the effect of glucocorticoid treatment, when the clitoris can be assumed to be less enlarged, before a decision on surgery. This will hopefully in the future add knowledge about non-surgical treatment outcomes. Previously, many surgeons recommended early single-stage surgery to take advantage of the early estrogen effect and to use tissue from the enlarged clitoris for the vulvoplasty, with the argument that this relieves the anxiety in the families. However, a recent report showed that there are persisting concerns regarding the 46,XX child’s genitalia regardless of whether surgery is performed. The controversies have arisen since the genital surgery can be regarded as an esthetic procedure in a small child, with no benefit at that age for the child and with a risk of disturbing sensitivity in the clitoris later in life.
Follow-up studies have shown a high percentage of secondary surgery later in life for functional and esthetic reasons, as judged by gynecologists or the patients themselves. In addition, the surgery is irreversible, making the decision more controversial. Patient advocacy groups have argued that all genital surgery should be delayed until the child with atypical genitalia is able to give full informed consent and participate in the decision-making. This has resulted in a legal ban on conducting genital surgery for 46,XX children with atypical genitals in some countries, such as Germany. Genital surgery has also been condemned by organizations such as the United Nations rapporteur on torture, Amnesty International, the European Union Agency for Fundamental Rights, Human Rights Watch and the Council of Europe Commissioner for Human Rights. However, the largest group representing patients with CAH in the USA, the Congenital Adrenal Hyperplasia Research Education and Support Foundation (CARES), opposes a ban on early surgery. It should be noted that, in recent studies, most parents favor early surgery with very few exceptions, even to a larger extent than the women themselves. Data on what the women with CAH themselves think regarding the timing of surgery are scarcer. In a Finnish study of 24 women (CAH n = 16) who had had genital surgery in childhood (median age 2.1 years, range 0.4–14.8), none thought the procedure had been performed too early and 17 believed that surgery had been done at a proper age. Of note, 3 women with CAH thought the surgery had been done too late (it had been performed at 9, 14 and 17 years, respectively). In a French study, 21 adult women with CAH and controls were included. Of the women with CAH, 90% (100% of those with early surgery and 80% of those with late surgery) considered early surgery before 1 year of age the preferred option, while only 52% of controls thought so. In a Malaysian study, 59 girls/women with CAH aged 10–28 years were recruited.
Of these, 51% did not respond to the question about the timing of surgery, while 36% recommended early surgery and 13% late. In a UK survey study including 12 women with CAH, of whom 8 had had surgery, the preference was for early surgery. In a pan-European study including 226 adult women with CAH, only 144 responded; i.e., 36.3% probably had no opinion on the question about the timing of genital surgery. Among those responding, 76% favored early, 10% late and 14% were indifferent. Similarly, we found in the current study that the majority of adult women with CAH with an opinion recommended early surgery. Of those with a personal opinion favoring early surgery, 70% had had early surgery themselves, while of those favoring late surgery, 42% had had late surgery. Thus, those who had had early surgery were more prone to advocate early surgery, which may be a reflection of good coping strategies. It is an inherent difficulty for this type of study that patients with good coping will come to terms with their situation. 46,XX children with CAH and Prader 4–5 are usually not separated from Prader 1–3 in similar studies, such as the current study, possibly due to a limited number of participants. It is likely that much unnecessary surgery has been performed in girls and women with Prader 1–3. Patients who have undergone genital surgery are unable to compare their experience to what it might have been like without the procedure. Very little data exist on how girls perceive growing up with a mild to moderate clitoromegaly and on how much genital surgery is needed if surgery is delayed until puberty. The results after genital surgery in puberty are likely better both esthetically and functionally. It should be noted that it is difficult to reconstruct in an area with scars and strictures.
Postponing surgery is of course most difficult in Prader 3–5 and severe genetic variants, where there is a risk of psychological burden growing up if no genital surgery has been performed. However, we cannot be certain what these individuals would have preferred if they had been able to consent before early surgery. Even more scarce are studies on experiences of living with clitoromegaly. In the group of women who had not been operated on, 54% regarded their clitoris as “too large”, while among the operated women 66% regarded the clitoris to be of normal size. Earlier studies interviewing women with CAH gave many examples of the stigma felt regarding a large clitoris. Recently, an on-line survey answered by 97 women with CAH reported that they recognized the clitoromegaly at a median age of 11–13 years and that there were no positive effects. The condition affected many activities, such as sports, and led to less wish for tight clothing or for changing clothes in public locker rooms. They also experienced poor self-esteem, gender self-perception and body image. Another argument against early surgery is the risk of a future change of gender identity. However, the great majority of 46,XX children with CAH raised as girls will not develop gender dysphoria in adulthood. Having said that, many will develop a non-heterosexual orientation, which is associated with the severity of the genotype. Limitations Even though this was a fairly large study of adult women with CAH, the number of participants was still limited, which prevented us from doing subgroup analysis of the different pheno- and genotypes. There is, however, an inherent bias in that those with more severe virilization are mostly the ones with early operation. Even so, many women with lower degrees of virilization were also operated on early. Thus, we have no knowledge of how the external genitalia change during female puberty and whether natural development possibly creates a better functional and esthetic result than surgery.
Moreover, a large proportion did not have a clear opinion, making the groups with a personal opinion on early or late surgery even smaller. The data was collected 20 years ago, before the discussion and questioning of early genital surgery. Hence, the women in this study have not been exposed to the discussion from a human rights perspective and the thoughts about individual self-determination. This makes it difficult to draw conclusions regarding the present situation. In this reanalysis of data collected more than 20 years ago, before the questioning of early genital surgery, the timing of surgery was the overarching question. The majority of women with CAH had had genital surgery, but almost half (42%) of all participants had no opinion on the timing of surgery. Most of those with an opinion thought that early surgery was preferred. Their preferences were often in accordance with their own age for surgery.
This is an extremely difficult subject, and in the future it will be especially important to follow up the group of non-operated girls with CAH.
Outcomes of cataract surgery training among ophthalmology trainees in the independent sector and within the NHS

A structured comprehensive training programme within the independent sector (IS) allowed two trainees to gain exposure to numerous cataract cases of varying complexity while ensuring patient safety and maintaining surgical efficiency. The training outcomes are comparable to NHS training lists and the national UK cataract surgery audit standards, indicating that IS training can safely supplement traditional NHS-based training. Our study demonstrates the potential for integrating high-quality cataract surgery training within high-volume lists in both IS and NHS sectors, maintaining patient safety and efficiency comparable to NHS standards. These pilot results could inform education policymakers on strategies to enhance ophthalmic training for the future generation of cataract surgeons. Further large-scale research is needed to validate these findings and guide policy for broader adoption. In recent years, there has been a dramatic change in the way NHS-funded cataract operations are performed in England. A significant proportion of cases are now being performed by independent sector (IS) providers: a figure that stood at only 17% in 2016 had skyrocketed to 83% by 2021. This drastic change in service delivery, along with the impact of the COVID-19 lockdown, has compounded the lack of surgical training opportunities for ophthalmology trainees, with the recent Royal College of Ophthalmologists (RCOphth) National Ophthalmology Database (NOD) report showing the number of cataract surgeries performed by experienced trainees was half that of 2019.
A recent UK regional trainee survey revealed 56% of the trainees observed a decrease in cataract training over the past 2 years and only 14% of senior trainees felt confident in managing high volume cataract lists —defined as 10 patients per list with low surgical complexity and no anaesthetist presence. This was echoed in the recent GMC National Training survey showing only 59% of trainees felt they were on track to undertake the number of procedures required at their stage of training. Furthermore, recent UK studies have highlighted the need for increased experience and training in managing cataract surgical complications such as posterior capsular rupture. Recognising the urgent need to safeguard the training of future generations of cataract surgeons, the RCOphth published a position statement committing to rapidly increasing access to surgical training in the IS. In March 2022, NHS England’s Cataract Specification mandated that every IS delivering NHS-funded cataract surgery must train NHS ophthalmic trainees on at least 11% of cases within 2 years in every region. The RCOphth also produced a ‘Cataract Training in IS’ blueprint to help IS providers and trainers facilitate appropriate and safe cataract training. Despite these efforts, only 7 out of 18 training regions have provided IS training opportunities, and only 6% of trainees have accessed these opportunities. Potential barriers from both trainees and trainers have been identified, particularly regarding the lack of systematic guidance and implementation of such training, and concerns about maintaining training quality and patient safety on high-volume lists. A recent comment by The Ophthalmologists Training Group called for a review of the standard of training delivered in the IS. 
Additionally, an editorial by the RCOphth Chair of the Training Committee highlighted the need for a collaborative approach between IS and NHS in delivering training, emphasising that clear standards should be established regarding both the volume and quality of training provided. To the best of our knowledge, cataract surgery training outcomes in the IS have not been evaluated or reported in the literature. The Northern Ophthalmology Deanery is one of the first few in the UK to collaborate with the IS to deliver a structured cataract training programme for ophthalmology trainees. The primary aim of this study is to provide a detailed analysis of cataract surgery training outcomes within the IS and explore its potential as a supplement to traditional NHS-based training. By comparing IS training outcomes with those from routine NHS training and national standards based on the NOD database, this study aims to evaluate whether IS training can be integrated safely without compromising training quality or patient safety. This prospective study was conducted within the South Tees Rotation, where the IS training programme was initiated. Two ophthalmology trainees, one in specialty training year 3 (ST3) and the other in specialty training year 5 (ST5), were selected for participation based on having the lowest number of completed cataract surgeries within the rotation (58 cases for the ST3 and 200 cases for the ST5). The selection was carried out by the college tutor and the training programme director, ensuring an independent choice of trainees, without influence from the trainers. Over 6 months, these two trainees were allowed to undertake supervised cataract surgery training in the high-volume IS lists, while continuing their routine cataract surgery training within the local NHS lists. The primary outcome measures included the number of cataract surgeries performed, surgical complication rates and compliance with NOD standards. 
IS and NHS setting IS cataract surgery training was conducted at the Teesside NewMedica Surgical Centre (Middlesbrough, UK). The training sessions were high-volume, typically involving 12 patients per session with varying case complexities. NHS cataract training was conducted at the James Cook University Hospital (Middlesbrough, UK). The NHS training reflected real-world scenarios, with the two trainees attending a mix of standard cataract lists (usually six patients per session) and combined theatre lists that included other procedures, such as vitreoretinal cases. This varied exposure was based on their allocated timetables and represented typical NHS training opportunities. Both trainees participated in RCOphth approved training programmes, following a standardised training pathway. This included mandatory attendance at the RCOphth microsurgical skills course and completion of appropriate work-based assessments related to cataract surgery each year. The training emphasised regular use of simulation tools, such as the Eyesi simulator within both settings. All surgical cases and complications were meticulously recorded in the Eyelogbook, and continuous complication and simulation logs were maintained for regular review during appraisals to assess training progression. Both trainees had prior experience in cataract surgery within the NHS, and all surgical trainers involved were RCOphth approved. In addition to the RCOphth requirements, the IS training programme included further recommendations to prepare trainees for high-volume lists. The two trainees were required to complete the Royal College of Edinburgh Non-technical Skills for Surgeons Course before starting the IS training programme. 
This theoretical learning was enhanced through scenarios simulating crises, real-time communication skills assessment and opportunities for trainees to lead the scrub nurse role, allowing them to understand the human factors and ergonomics crucial for safe high-volume cataract surgery. Modular cataract training was recommended for both trainees, regardless of prior experience, to help them adapt to the flow of high-volume surgery (see and ). This approach ensured consistent ‘touch-time’, allowing trainees to hone their surgical skills in a focused manner while maintaining patient safety. Additionally, the IS training programme encouraged using the International Council of Ophthalmology (ICO) cataract surgery competency framework, along with video analysis of surgical techniques, to objectively monitor trainee progression in every case and facilitate constructive feedback. Consent within both IS and NHS settings was obtained according to RCOphth guidelines. Patients were informed that, while they do not have the right to choose the name or designation of their surgeon, they can expect that the surgeon will have the adequate experience and skill to perform their surgery. Specifically, on arrival in the IS setting, patients were informed about the presence of a trainee surgeon, the possibility of the surgery being performed by a trainee under supervision and that their feedback on the experience may be collected anonymously post-surgery. This ensured patient agreement with the process. Data collection Relevant data on case numbers, case complexity (using adapted RCOphth risk stratification scoring system —see ), need for assistive equipment, take-over rate and intraoperative complications for both NHS and IS lists were prospectively collected and recorded using the surgical Eye logbook. Postoperative patient reviews followed local department guidelines. 
Postoperative data on vision and complications were collected retrospectively from the medical notes in both settings. Analysis was conducted based on cases with available face-to-face postoperative review data, either from community optician review (IS) or nurse-led clinic review (NHS). Within the IS setting, patient feedback was collected using anonymised questionnaires completed immediately following surgery. This feedback aimed to evaluate patients’ perceptions and experiences of being operated on by the two trainees in the IS, an environment where patients might not expect the presence of trainees. National standard The following cataract surgery outcome standards were extracted from the most recent RCOphth NOD Audit data published in 2023. Intraoperative rate of complication for experienced trainee surgeons (defined as ST3–ST7)—4.2%. Posterior capsule rupture (PCR) rate for experienced trainee surgeons—1.9%. Post-operative corrected-distance-visual-acuity of 6/12 or better achieved in 91% of eyes overall. Statistical analysis Statistical analysis was performed using SPSS V.26.0 (IBM SPSS Statistics for Windows, Armonk, New York, USA). Comparison between groups was conducted using Pearson’s χ 2 or Fisher’s exact test where appropriate for categorical variables, and t-test or Mann-Whitney U test for continuous variables. All continuous data were presented as mean±SD. Where multiple comparisons were undertaken between two groups, an adjusted p value was calculated based on the number of tests (Bonferroni correction) in addition to a non-adjusted p value. A non-adjusted p value was also included as this is an exploratory study. We performed sample size calculation based on the primary outcome (the proportion of patients achieving a postoperative visual acuity of ≥6/12) between the NHS and IS groups using the G*Power software. To detect a 20% proportion difference in the primary outcome (with power=0.80 and α=0.05), a sample size of 138 patients is required. 
In our study, we included a total of 146 patients with postop visual outcomes, allowing meaningful comparisons of visual outcome (and complication rate) between the two groups. Patient and public involvement Patients or the public were not involved in the design, or conduct, or reporting, or dissemination plans of our research. IS cataract surgery training was conducted at the Teesside NewMedica Surgical Centre (Middlesbrough, UK). The training sessions were high-volume, typically involving 12 patients per session with varying case complexities. NHS cataract training was conducted at the James Cook University Hospital (Middlesbrough, UK). The NHS training reflected real-world scenarios, with the two trainees attending a mix of standard cataract lists (usually six patients per session) and combined theatre lists that included other procedures, such as vitreoretinal cases. This varied exposure was based on their allocated timetables and represented typical NHS training opportunities. Both trainees participated in RCOphth approved training programmes, following a standardised training pathway. This included mandatory attendance at the RCOphth microsurgical skills course and completion of appropriate work-based assessments related to cataract surgery each year. The training emphasised regular use of simulation tools, such as the Eyesi simulator within both settings. All surgical cases and complications were meticulously recorded in the Eyelogbook, and continuous complication and simulation logs were maintained for regular review during appraisals to assess training progression. Both trainees had prior experience in cataract surgery within the NHS, and all surgical trainers involved were RCOphth approved. In addition to the RCOphth requirements, the IS training programme included further recommendations to prepare trainees for high-volume lists. 
The two trainees were required to complete the Royal College of Edinburgh Non-technical Skills for Surgeons Course before starting the IS training programme. This theoretical learning was enhanced through scenarios simulating crises, real-time communication skills assessment and opportunities for trainees to lead the scrub nurse role, allowing them to understand the human factors and ergonomics crucial for safe high-volume cataract surgery. Modular cataract training was recommended for both trainees, regardless of prior experience, to help them adapt to the flow of high-volume surgery (see and ). This approach ensured consistent ‘touch-time’, allowing trainees to hone their surgical skills in a focused manner while maintaining patient safety. Additionally, the IS training programme encouraged using the International Council of Ophthalmology (ICO) cataract surgery competency framework, along with video analysis of surgical techniques, to objectively monitor trainee progression in every case and facilitate constructive feedback. Consent within both IS and NHS settings was obtained according to RCOphth guidelines. Patients were informed that, while they do not have the right to choose the name or designation of their surgeon, they can expect that the surgeon will have the adequate experience and skill to perform their surgery. Specifically, on arrival in the IS setting, patients were informed about the presence of a trainee surgeon, the possibility of the surgery being performed by a trainee under supervision and that their feedback on the experience may be collected anonymously post-surgery. This ensured patient agreement with the process. Relevant data on case numbers, case complexity (using adapted RCOphth risk stratification scoring system —see ), need for assistive equipment, take-over rate and intraoperative complications for both NHS and IS lists were prospectively collected and recorded using the surgical Eye logbook. 
Postoperative patient reviews followed local department guidelines. Postoperative data on vision and complications were collected retrospectively from the medical notes in both settings. Analysis was conducted based on cases with available face-to-face postoperative review data, either from community optician review (IS) or nurse-led clinic review (NHS). Within the IS setting, patient feedback was collected using anonymised questionnaires completed immediately following surgery. This feedback aimed to evaluate patients' perceptions and experiences of being operated on by the two trainees in the IS, an environment where patients might not expect the presence of trainees. The following cataract surgery outcome standards were extracted from the most recent RCOphth NOD Audit data published in 2023:
- Intraoperative complication rate for experienced trainee surgeons (defined as ST3–ST7): 4.2%.
- Posterior capsule rupture (PCR) rate for experienced trainee surgeons: 1.9%.
- Postoperative corrected distance visual acuity of 6/12 or better achieved in 91% of eyes overall.
Statistical analysis was performed using SPSS V.26.0 (IBM SPSS Statistics for Windows, Armonk, New York, USA). Comparison between groups was conducted using Pearson's χ² or Fisher's exact test where appropriate for categorical variables, and the t-test or Mann-Whitney U test for continuous variables. All continuous data were presented as mean±SD. Where multiple comparisons were undertaken between two groups, an adjusted p value was calculated based on the number of tests (Bonferroni correction); non-adjusted p values are also reported, as this is an exploratory study. We performed a sample size calculation based on the primary outcome (the proportion of patients achieving a postoperative visual acuity of ≥6/12) between the NHS and IS groups using the G*Power software.
To detect a 20% proportion difference in the primary outcome (with power=0.80 and α=0.05), a sample size of 138 patients is required. In our study, we included a total of 146 patients with postoperative visual outcomes, allowing meaningful comparisons of visual outcome (and complication rate) between the two groups.
Placement details
During the 11-month study period, a total of 161 full cataract surgeries were performed in the IS, compared with 62 in the NHS. On average, the two trainees actively participated in approximately 46% of cases per IS list, translating to about five to six cases per 12-patient session. In contrast, the two trainees participated in approximately 41% of cases per NHS list, which averaged about two to three cases per six-patient session. Both NHS and IS placements involved comparable numbers of theatre lists attended (NHS: n=31, IS: n=32). The two trainees also assisted in 43 cases in IS. The term 'assisting' in this context refers to trainees being involved in modular cataract training at the beginning of the IS programme, where they performed specific parts of the surgery under supervision, while the consultant completed the remainder. This approach aimed to accelerate trainee skill development while maintaining patient safety and surgical efficiency on high-volume lists.
Operative details
Within IS lists, 43% of cases had at least one complex risk factor, which was comparable to 35% of the cases in NHS (p=0.32, adjusted p=1). The proportion of patients with each complexity risk factor and overall complexity grading within each setting are provided in . There was no significant difference in the mean adapted RCOphth risk stratification score between IS and NHS cases (1.53±0.78 vs 1.36±0.59; p=0.1, adjusted p=1).
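The 'adjusted p' values reported throughout are Bonferroni corrections, and the underlying proportion comparisons and sample size calculation (performed in SPSS and G*Power in the study) can be sketched in Python. Everything below is illustrative only: the baseline proportions and the 2×2 counts are assumptions for the sketch, not the study's actual G*Power inputs or contingency tables.

```python
from math import asin, sqrt
from scipy.stats import norm, chi2_contingency

def cohen_h(p1, p2):
    """Cohen's effect size h for two proportions (arcsine transform)."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ((z_alpha + z_power) / cohen_h(p1, p2)) ** 2

# Placeholder baseline: a 20-percentage-point difference (70% vs 50%)
n = n_per_group(0.70, 0.50)  # about 46 eyes per group with these inputs

# Pearson's chi-squared on a 2x2 table of eyes achieving >=6/12
# (invented counts roughly in the spirit of 97% of 97 IS vs 73% of 49 NHS)
table = [[94, 3],   # IS: achieved / not achieved
         [36, 13]]  # NHS
chi2, p, dof, expected = chi2_contingency(table)

# Bonferroni adjustment when two comparisons are undertaken
p_adjusted = min(p * 2, 1.0)
```

Note that `chi2_contingency` applies Yates continuity correction for 2×2 tables by default, and the required n depends strongly on the assumed baseline proportion, which is why this sketch does not reproduce the study's figure of 138 patients.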
The rate of additional equipment used was similar between IS and NHS lists (14% vs 13%; p=0.88, adjusted p=1) ( and ). The overall rate of intraoperative take-over by a senior surgeon was 7% in IS, which was also comparable to 6% in NHS (p=0.919, adjusted p=1). In the IS setting, all cases were performed with topical and intracameral anaesthesia. In contrast, the NHS setting used a mix of topical and subtenon local anaesthesia based on the trainer's preference. The decision of which cases trainees would operate on was made by the trainer based on a multitude of factors aimed at optimising training outcomes and maintaining patient safety. These factors included the trainee's current skill level, the specific complexity of the case and the need to ensure the efficiency of high-volume lists. This approach was mirrored in the NHS setting, ensuring a fair and balanced exposure to different case complexities for the trainees across both training environments.
Outcomes
Overall, there was no significant difference in the intraoperative complication rate between the two settings (3% IS vs 5% NHS, p=0.53, adjusted p=1). Within the IS setting, there were two cases of PCR, one case of iris prolapse, one case of anterior capsule tear and one case of fractured IOL haptic with successful IOL exchange performed by the trainee under supervision. Within the NHS setting, there was one case of iris prolapse and two cases of anterior capsule tear. There was also no significant difference in the PCR rate between the two settings considering the volume of cases performed (1.2% IS vs 0% NHS; p=0.38, adjusted p=1).
Postoperative details
Postoperative assessment reports, including postoperative vision (from an optician or nurse-led clinic), were available for 60% of IS cases (n=97) and 79% of NHS cases (n=49).
Postoperative outcome data were less comprehensive in the IS setting because uncomplicated second-eye surgeries with no comorbidities were followed up only with a nurse-led telephone consultation on day 1 and a community optician review at 4 weeks, per Get It Right First Time (GIRFT) guidelines. Therefore, all postoperative outcomes were measured based on cases with available vision data; notes were also scanned for optician referrals from postoperative reviews highlighting complications, and these cases were included in the calculations. Postoperative complication rates (6% IS vs 4% NHS; p=0.60, adjusted p=1) were similar in both settings. Within IS, there were four cases of postoperative cystoid macula oedema (CMO), one case of postoperative anterior uveitis and one case of corneal oedema; all resolved with topical treatment. In NHS, there were two cases of postoperative CMO, both of which resolved with topical medication. Postoperative vision ≥6/12 was achieved in a significantly higher proportion of IS cases compared with NHS cases (97% IS vs 73% NHS; p<0.001, adjusted p<0.001); however, NHS cases had a slightly higher proportion of comorbidity (42% IS vs 53% NHS; p=0.22, adjusted p=1). When cases with comorbidity were excluded from the postoperative vision analysis, a similar proportion of eyes achieved ≥6/12 in both settings (100% IS vs 96% NHS, p=0.12, adjusted p=1). Anonymised patient feedback surveys were conducted immediately post-surgery on a total of 20 patients on both trainees' lists within the IS. All (100%) patients agreed that it was a positive experience, and they would be happy for the same trainee doctor to operate on them again.
Cataract surgical skills are a core competency in ophthalmology training globally. In the UK, trainees are expected to complete a minimum of 350 phacoemulsification cataract surgeries by the end of a 7-year programme, with recent guidance from RCOphth and GIRFT recommending at least 10 cases per list for experienced trainees. However, as more cataract surgeries are outsourced to the IS, trainees face reduced access to high-volume training opportunities, raising concerns about the sustainability of workforce development. This pilot study is the first to report ophthalmology trainees' cataract surgery experiences and outcomes within the IS. A primary concern among trainers and trainees is the potential for increased complications in high-volume lists. Our findings show no significant difference in senior take-over rates or intraoperative complication rates between IS and NHS settings, with IS complication rates lower than the national NOD standard for experienced trainees. Postoperative complications were also comparable between the two settings, with visual outcomes meeting NOD standards.
Surgical efficiency was maintained across both IS and NHS lists, with all lists completed within 4 hours, despite higher patient volumes in IS settings. Trainees were exposed to a variety of case complexities, as indicated by RCOphth risk stratification, and had opportunities to use assistive surgical devices. These findings suggest that high-quality cataract training can be provided within high-volume lists, with outcomes comparable to standard volume lists. Patient perceptions of trainee involvement, traditionally a concern in IS settings, were addressed through patient feedback. The majority of patients were satisfied with their surgical experience and the communication skills of their surgeons. Due to logistical constraints, similar patient surveys were not conducted within the NHS setting. In the NHS, patient feedback mechanisms are already established and routinely collected through various means, not specifically tied to trainee involvement. For future studies, incorporating parallel patient feedback collection in both IS and NHS settings would provide a more comprehensive evaluation of patient perceptions and enhance the comparative analysis of training environments. The IS training programme examined in this study incorporates real-time feedback and progress monitoring through the ICO cataract surgery competency framework, supplemented by video recordings. This structured approach ensures comprehensive skill development and supports an integrated training model across IS and NHS settings. Furthermore, the programme prioritises non-technical skills (NTS) training, including communication, teamwork, and crisis management, which are increasingly recognised as critical to surgical safety. The positive trainee feedback and the maintenance of surgical efficiency underscore the benefits of this holistic training approach on high-volume lists. 
While this pilot study is limited by the small number of trainees and the higher volume of cases in IS compared with the NHS, it provides an accurate representation of current training conditions. Despite these constraints, the inclusion of over 200 cases of varying complexity offers a robust assessment of surgical training outcomes. The issue of selective case inclusion in IS centres, as noted in prior studies, requires ongoing investigation. However, no significant differences in patient characteristics or case complexity were observed between IS and NHS settings in this study. We also acknowledge the limitation of multiple testing within this exploratory study and, to aid interpretation, have provided both non-adjusted and adjusted p values throughout. Importantly, as a pilot study, these findings serve as a preliminary assessment. A larger, multicentre study is required before widespread adoption of cataract surgery training in the IS can be recommended. Such a study would provide more definitive data on training outcomes and help establish a sustainable model for integrating IS-based training with traditional NHS training pathways. Overall, our study highlights the potential for delivering high-quality cataract surgery training within high-volume lists in both IS and NHS sectors. With structured training programmes, regular assessments and a focus on NTS, IS training has the potential to supplement traditional NHS-based training, ensuring patient safety and maintaining training quality. Further research is needed to examine training outcomes across different settings and explore strategies to enhance ophthalmology training for future generations. Online supplemental material: tables 1-3 and figure 1 (10.1136/bmjophth-2024-001716).
ChatGPT (GPT-4) versus doctors on complex cases of the Swedish family medicine specialist examination: an observational comparative study
Artificial intelligence (AI) in medicine has been the subject of increasing research, even though real-world applications are relatively few. Over the last few years, large AI models called generative pretrained transformers (GPT) have demonstrated remarkable abilities beyond simple text generation, such as answering questions and participating in chat conversations. ChatGPT from OpenAI is arguably one of the most well-known models. At the time of this study, their two latest AI models were GPT-3.5 and GPT-4, with GPT-4 being the most advanced. Countless clinical applications could be envisioned for an AI system that can accurately answer questions from healthcare staff and patients. The impact could be enormous in primary healthcare, where healthcare staff need to keep themselves up-to-date on a broad spectrum of medical conditions. GPT-3.5 and GPT-4 have demonstrated human-level performance on several professional benchmarks and achieved moderate to excellent results in various medical examinations but did not pass the general practice licensing examinations of Taiwan and the UK. However, the medical questions in these assessments have typically been multiple-choice questions, which differ from a clinician asking the chatbot for advice on managing real patient cases. Additionally, the studies focusing on general practice have tested GPT-3.5, which may perform significantly worse than GPT-4. At the time of writing, research has not explored GPT-4's ability to provide comprehensive free-text assessments of primary care cases. The Swedish family medicine specialist examination is not mandatory, but it is a valuable credential taken by resident doctors in general medicine as they become certified specialists.
One part of the examination is a written test with eight complex cases that often involve intricate symptoms combined with social or behavioural factors, requiring comprehensive long-form responses. Our research question investigates how GPT-4 performs in comparison to real doctors taking the examination.
Study design
This study compared the performance of GPT-4 with responses from human doctors on cases from the Swedish family medicine specialist examination. The responses from three distinct groups were scored and compared: (A) randomly selected doctor responses, (B) top-tier doctor responses and (C) responses generated by GPT-4.
Objective and outcome measures
The objective was to compare GPT-4 to real doctors, regarding their ability to write comprehensive assessments of complex cases from primary care.
Primary outcome measure
The mean difference in scores between GPT-4 and randomly selected responses by human doctors, as well as between GPT-4 and top-tier responses.
Secondary outcome measures
The correlation between differences in response length and response score; the intraclass correlation coefficient between reviewers; and the percentage of maximum score achieved by each group in different subject categories.
Data collection
Sourcing of medical cases
All cases from the Swedish family medicine specialist examination from 2017 to 2022 were used for this study, totalling 48 cases (see for examples). These examinations are publicly available on the Swedish Association of General Practice (SFAM)'s website. The cases require comprehensive responses, typically consisting of several paragraphs of free text. They are often complex, involving symptoms indicative of various diseases and complicating factors such as social problems, addiction, poor compliance, legal aspects and patients near the end of life. provides a summary of the number of cases addressing different topics.
Sourcing of doctor responses, groups A and B
Anonymous responses from past examinations were used. Group A: A digital random choice function was used to draw a single anonymous response for each case, from all the human responses submitted to the examination when it took place. Group B: The Swedish Association of General Practice, SFAM, has published an example of a top-tier response for each case. These responses were chosen by the examination reviewers as, in their opinion, the best response for each question, and were used for Group B.
Obtaining GPT-4 responses, group C
Medical cases were sent to GPT-4 in an automated manner through OpenAI's application programming interface, using the version of GPT-4 released on 3 August 2023. Apart from the case itself, additional instructions were sent along with each case to provide some context, based on the written instructions included in the 2022 examination (See ). A single response was collected for each case, without any follow-up questions (See ). A separate chat session was created for each case.
Scoring the responses
For each case, SFAM has published an evaluation guide that includes a few main points which should be included in a good answer, although the precise scoring guide used for the examination is not public. To quantify the performance of each examination response, the published evaluation guide for each case was adapted into a criteria-based scoring guide. Each scoring guide could award a score ranging from 0 to 10 points. This adaptation involved rephrasing each evaluation guide into a set of true-or-false criteria. The original evaluation guide was followed as closely as possible, but in cases where it was vaguely phrased, official Swedish medical guidelines were consulted to help formulate the criteria. For each criterion met, a specific number of points was awarded (see ). A group of three medical doctors, blinded to the origins of the responses, rated the responses using the scoring guide.
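A minimal sketch of how such a criteria-based scoring guide could be represented in code. The criteria, point values and fulfilled items below are invented for illustration; the real guides were adapted from SFAM's published evaluation guides, with each case capped at 10 points.

```python
# Hypothetical true-or-false criteria for one case: description -> points if met
criteria = {
    "Mentions depression as a differential diagnosis": 2.0,
    "Asks about alcohol consumption": 1.0,
    "Orders thyroid function tests": 2.0,
    "Plans a follow-up appointment": 1.0,
}

def score_response(met_criteria: set, case_criteria: dict) -> float:
    """Sum the points of all fulfilled criteria, capped at 10 per case."""
    return min(10.0, sum(pts for name, pts in case_criteria.items()
                         if name in met_criteria))

# Each response is scored by two raters; their average enters the analysis
rater1 = score_response({"Asks about alcohol consumption",
                         "Orders thyroid function tests"}, criteria)  # 3.0
rater2 = score_response({"Orders thyroid function tests"}, criteria)  # 2.0
final_score = (rater1 + rater2) / 2
```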
Each response was scored by two of the three raters, and the average of their scores was used for the statistical analysis. The same pair of raters assessed all responses pertaining to the same case. The doctor who created the scoring guide is a specialist in general practice, whereas two of the reviewers are residents nearing the end of their residency, and one is a licensed doctor working in general practice. The evaluators were selected based on their expertise and availability. During the review process for this paper, OpenAI released GPT-4o, its latest flagship model. The experiment was subsequently repeated to include responses from GPT-4o. Due to limited availability, it was not possible to reassemble the original panel of evaluators; instead, a single evaluator scored the responses across all groups, including the new GPT-4o group.
Statistical analysis
Sample size calculation
In the primary research question, we aimed to make two group comparisons, each producing a p value. Using the Bonferroni approach to adjust for multiple testing, the level of significance was set to 0.025. The power was set to 0.8 and the minimal difference between groups to be detected was set to one point, which resulted in a required sample size of 48 cases.
Data analysis
After scoring the responses to all 48 cases, the difference between each doctor group and GPT-4 was calculated for each case. A paired t-test was used to compare each doctor group with GPT-4, pairing the scores by question. To assess the reliability of the averaged scores derived from the raters' use of the scoring guide, we conducted an intraclass correlation coefficient (ICC) analysis, specifically employing the two-way mixed-effects model for the mean of k raters, using the psych package in R. In addition, we examined the differences in response length (number of words) between the top-tier and GPT-4 responses, using a paired t-test paired by question.
As a measure of the information density, we divided the score by the number of words for each response. Finally, a linear regression analysis was performed to explore the relationship between the difference in lengths and the difference in scores. The latter was set as the dependent variable and the former as the independent variable. The OLS function from the statsmodels library was employed for this analysis. Each individual true-or-false scoring criterion was assigned to a category by the author RA, such as 'suggest diagnosis' for points awarded for mentioning a possible diagnosis, and 'patient history inquiry' for points awarded for mentioning questions that should be asked of the patient. For more details and definitions of the categories, see . The top nine most common categories were used, and the rest were grouped under 'other'. These categories were then used to compare performance across different subject areas. For each category, we calculated the maximum score and the percentage of that score achieved by each group. The Wilcoxon signed-rank test was used to assess the significance of the difference between top-tier and random doctor responses, as well as between GPT-4 and random doctor responses, using the differences in scores paired by scoring criteria.
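The data analysis steps described above can be sketched in Python on synthetic data. The study itself used the psych package in R for the ICC and the statsmodels OLS fit for the regression; in this sketch, scipy's `linregress` stands in for the OLS fit of the simple regression, the ICC(3,k) (two-way mixed effects, mean of k raters) is computed directly from the Shrout-Fleiss mean squares, and all scores are randomly generated placeholders rather than study data.

```python
import numpy as np
from scipy.stats import ttest_rel, linregress, wilcoxon

rng = np.random.default_rng(0)

# Synthetic 0-10 scores for 48 cases, standing in for the real data
gpt4 = rng.uniform(4, 9, 48)
random_doc = gpt4 - rng.uniform(0, 3, 48)   # randomly drawn doctor responses
top_doc = gpt4 + rng.uniform(-1, 3, 48)     # top-tier responses

# Two paired comparisons; Bonferroni-adjusted significance level 0.05 / 2
ALPHA = 0.025
t1, p1 = ttest_rel(gpt4, random_doc)
t2, p2 = ttest_rel(gpt4, top_doc)

def icc3k(scores):
    """ICC(3,k): two-way mixed effects, consistency, mean of k raters."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between responses
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Two raters differing only by a constant offset agree perfectly: ICC -> 1.0
icc = icc3k(np.column_stack([gpt4, gpt4 + 0.5]))

# Regression: score difference (dependent) on word-count difference (independent)
len_diff = rng.normal(0, 50, 48)
score_diff = 0.01 * len_diff + rng.normal(0, 0.5, 48)
fit = linregress(len_diff, score_diff)

# Wilcoxon signed-rank on paired score differences
w_stat, w_p = wilcoxon(gpt4 - random_doc)
```

The consistency form of ICC(3,k) deliberately ignores a constant offset between raters, which is why the offset-only example above yields perfect agreement.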
Secondary outcome measures The correlation between differences in response length and response score; the intraclass correlation coefficient between reviewers; and the percentage of maximum score achieved by each group in different subject categories. The mean difference in scores between GPT-4 and randomly selected responses by human doctors, as well as between GPT-4 and top-tier responses. The correlation between differences in response length and response score; the intraclass correlation coefficient between reviewers; and the percentage of maximum score achieved by each group in different subject categories. Sourcing of medical cases All cases from the Swedish family medicine specialist examination from 2017 to 2022 were used for this study, totalling 48 cases (see for examples). These examinations are publicly available on the Swedish Association of General Practice (SFAM)’s website. The cases require comprehensive responses, typically consisting of several paragraphs of free text. They are often complex, involving symptoms indicative of various diseases and complicating factors such as social problems, addiction, poor compliance, legal aspects and patients near the end of life. provides a summary of the number of cases addressing different topics. Sourcing of doctor responses groups A and B Anonymous responses from past examinations were used. Group A: A digital random choice function was used to draw a single anonymous response for each case, from all the human responses submitted to the examination when it took place. Group B: The Swedish Association of General Practice, SFAM, has published an example of a top-tier response for each case. These responses were chosen arbitrarily by the examination reviewers as the best response for each question, in their opinion, and were used for Group B. 
Obtaining GPT-4 responses, group C Medical cases were sent to GPT-4 in an automated manner through OpenAI’s application programming interface, using the version of GPT-4 released on 3 August 2023. Apart from the case itself, additional instructions were sent along with each case to provide some context, based on the written instructions included in the 2022 examination (See ). A single response was collected for each case, without any follow-up questions (See ). A separate chat session was created for each case. Scoring the responses For each case, SFAM has published an evaluation guide that includes a few main points which should be included in a good answer, although the precise scoring guide used for the examination is not public. To quantify the performance of each examination response, the published evaluation guide for each case was adapted into a criteria-based scoring guide. Each scoring guide could award a score ranging from 0 to 10 points. This adaptation involved rephrasing each evaluation guide into a set of true-or-false criteria. The original evaluation guide was followed as closely as possible, but in cases where it was vaguely phrased, official Swedish medical guidelines were consulted to help formulate the criteria. For each criterion met, a specific number of points was awarded (see ). A group of three medical doctors, blinded to the origins of the responses, rated the responses using the scoring guide. Each response was scored by two of the three raters, and the average of their scores was used for the statistical analysis. The same pair of raters assessed all responses pertaining to the same case. The doctor creating the scoring guide is a specialist in general practice, whereas two of the reviewers are residents nearing the end of their residency, and one is a licensed doctor working in general practice. The evaluators were selected based on their expertise and availability. 
During the review process for this paper, OpenAI released GPT-4o, its latest flagship model. The experiment was subsequently repeated to include responses from GPT-4o. Due to limited availability, it was not possible to reassemble the original panel of evaluators; instead, a single evaluator scored the responses across all groups, including the new GPT-4o group. All cases from the Swedish family medicine specialist examination from 2017 to 2022 were used for this study, totalling 48 cases (see for examples). These examinations are publicly available on the Swedish Association of General Practice (SFAM)’s website. The cases require comprehensive responses, typically consisting of several paragraphs of free text. They are often complex, involving symptoms indicative of various diseases and complicating factors such as social problems, addiction, poor compliance, legal aspects and patients near the end of life. provides a summary of the number of cases addressing different topics. Anonymous responses from past examinations were used. Group A: A digital random choice function was used to draw a single anonymous response for each case, from all the human responses submitted to the examination when it took place. Group B: The Swedish Association of General Practice, SFAM, has published an example of a top-tier response for each case. These responses were chosen arbitrarily by the examination reviewers as the best response for each question, in their opinion, and were used for Group B. Medical cases were sent to GPT-4 in an automated manner through OpenAI’s application programming interface, using the version of GPT-4 released on 3 August 2023. Apart from the case itself, additional instructions were sent along with each case to provide some context, based on the written instructions included in the 2022 examination (See ). A single response was collected for each case, without any follow-up questions (See ). A separate chat session was created for each case. 
Sample size calculation
In the primary research question, we aimed to make two group comparisons, each producing a p value. Using the Bonferroni approach to adjust for multiple testing, the level of significance was set to 0.025.
The power was set to 0.8, and the minimal difference between groups to be detected was set to one point, which resulted in a required sample size of 48 cases.
Data analysis
After scoring the responses to all 48 cases, the difference between each doctor group and GPT-4 was calculated for each case. A paired t-test was used to compare each doctor group with GPT-4, pairing the scores by question. To assess the reliability of the averaged scores derived from the raters' use of the scoring guide, we conducted an intraclass correlation coefficient (ICC) analysis, specifically employing the two-way mixed-effects model for the mean of k raters, using the psych package in R. In addition, we examined the differences in response length (number of words) between the top-tier and GPT-4 responses, using a paired t-test paired by question. As a measure of information density, we divided the score by the number of words for each response. Finally, a linear regression analysis was performed to explore the relationship between the difference in lengths and the difference in scores; the latter was set as the dependent variable and the former as the independent variable. The OLS function from the statsmodels library was employed for this analysis. Each individual true-or-false scoring criterion was assigned to a category by the author RA, such as 'suggest diagnosis' for points awarded for mentioning a possible diagnosis, and 'patient history inquiry' for points awarded for mentioning questions that should be asked of the patient. For more details and definitions of the categories, see . The top nine most common categories were used, and the rest were grouped under 'other'. These categories were then used to compare performance across different subject areas. For each category, we calculated the maximum score and the percentage of that score achieved by each group.
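The sample size calculation above (two-sided α = 0.025 after Bonferroni adjustment, 80% power, a one-point minimal difference) can be reproduced approximately with the standard normal-approximation formula for a paired comparison. The standard deviation of the paired differences used below is our assumption for illustration; the paper does not report it:

```python
import math

# Normal-approximation sample size for a paired comparison:
#   n = ((z_alpha + z_beta) * sd_diff / delta)^2, rounded up.
# sd_diff is an ASSUMED standard deviation of the paired score differences.

def paired_sample_size(delta, sd_diff, z_alpha=2.2414, z_beta=0.8416):
    # z_alpha: two-sided normal quantile for alpha = 0.025 (Bonferroni-adjusted)
    # z_beta: normal quantile for 80% power
    return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# For sd_diff around 2.2 points this lands in the vicinity of the reported 48 cases.
print(paired_sample_size(delta=1.0, sd_diff=2.2))
```

The normal approximation slightly undershoots an exact paired t-test calculation, so small differences from the reported figure are expected.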
The Wilcoxon signed-rank test was used to assess the significance of the difference between top-tier and random doctor responses, as well as between GPT-4 and random doctor responses, using the differences in scores paired by scoring criteria.
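The Wilcoxon signed-rank comparison described above is typically run with scipy.stats.wilcoxon; as a self-contained illustration, the large-sample normal approximation of the statistic can be sketched as follows (zero differences are dropped and tied absolute differences share an average rank):

```python
import math

# Wilcoxon signed-rank statistic with the large-sample normal approximation.

def wilcoxon_z(diffs):
    d = [x for x in diffs if x != 0]          # drop zero differences
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                              # average ranks over ties in |d|
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[order[t]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)  # positive-rank sum
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd

print(wilcoxon_z([1, -1, 2, -2, 3, -3]))  # perfectly symmetric differences -> 0.0
```

This version omits the tie correction to the variance, which scipy applies; it is a sketch of the statistic, not a drop-in replacement.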
GPT-4 scored lower than any doctor group. The differences between groups were statistically significant. For examples of responses, see . The complete scores are available in a public repository. The intraclass correlation coefficient for the scores from the three raters was 0.92 (95% CI 0.90 to 0.94, p<0.001), demonstrating the excellent reliability of the scoring guide. The results of the repeated experiment with GPT-4o are not included in the above tables, as a single evaluator scored all groups, making these scores not directly comparable with the original results. However, the original findings were confirmed. Additionally, GPT-4o scored an average of 0.7 points higher than GPT-4 (p=0.024), though random doctor responses continued to outperform GPT-4o, with an average of 0.7 points higher (p=0.044). The top-tier responses were on average 60 words longer than GPT-4's (p<0.001, 95% CI 30 to 97). The correlation between differences in length and differences in scores of responses between GPT-4 and the top-tier answers was not statistically significant (p=0.11). The percentage of the total maximum score for each subject category achieved by each group is illustrated in . More details about the definition of each category, as well as illustrative examples, are available in .
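The intraclass correlation reported above uses the two-way mixed-effects model for the mean of k raters (ICC(3,k) in the Shrout–Fleiss taxonomy; ICC3k in R's psych package). It can be computed from a subjects × raters score matrix with a plain ANOVA decomposition; the score matrix below is invented for illustration:

```python
# Two-way mixed-effects ICC for the mean of k raters (Shrout-Fleiss ICC(3,k)):
#   ICC(3,k) = (MS_subjects - MS_error) / MS_subjects

def icc3k(scores):
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    sst = sum((x - grand) ** 2 for r in scores for x in r)
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / msr

# Invented scores: rater 2 is systematically one point stricter but otherwise
# agrees perfectly, so consistency is perfect and ICC(3,k) = 1.
scores = [[8, 7], [5, 4], [9, 8], [3, 2]]
print(icc3k(scores))  # 1.0
```

Because ICC(3,k) measures consistency, a constant rater offset (as in the example) does not reduce it; only disagreement in the ordering and spacing of subjects does.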
The main finding was that GPT-4 scored significantly lower than any group of doctors on the Swedish family medicine specialist examination, with top-tier responses scoring almost three points higher . This statistically significant difference indicates that graduating specialists in general practice perform better than GPT-4 in writing comprehensive assessments of complex primary care cases. What such a difference corresponds to in practice differs a lot from case to case. For example, in one case, GPT-4 scored 2.75 points lower than the top-tier response due to mentioning one fewer important differential diagnosis and two fewer aspects of treatment and follow-up. Generally, it appears that GPT-4 significantly lags behind the random doctor responses in critical areas such as suggesting relevant diagnoses, laboratory tests, physical examinations, referrals and addressing legal matters. For any general practitioners currently using GPT-4, this finding is concerning, as these are precisely the areas where one might seek guidance. For patients and the general public, these findings underscore the importance of maintaining human oversight in medical decision-making. The information density was higher for the two doctor groups than for GPT-4, indicating that human doctors are better at conveying relevant information concisely. Despite these limitations, GPT-4’s performance is impressive, considering it is not a registered medical device and has not been specifically trained for medical use. The repeated experiment with GPT-4o demonstrates a meaningful advancement, suggesting that the performance of general-purpose chatbots is approaching that of graduating specialists in general medicine, though it has not yet reached equivalent levels. There was also a significant difference between the top-tier and randomly selected doctor responses, raising the question of what requirements should be met by a medical chatbot. 
Is it enough for it to perform better than the average doctor, or should it aim to match or exceed the best responses from a group of doctors? Comparison with the existing literature In one study, GPT-4 passed every test in a series of dermatology licensing examinations, achieving over 80% for the English version (pass level: 60%). No data were presented on the performance of real dermatologists for comparison. On the other hand, the average score of GPT-3.5 was only 60.17% on the general practice licensing examination of the UK (pass level ≈ 70%), and it scored 41.6% on the corresponding Taiwanese licensing examination (pass level=60%). This aligns well with our results, even though we used GPT-4. Both these studies, and several similar studies in other medical disciplines, used multiple choice questions, which is a task very different from providing free-text responses to complex clinical cases. Providing free-text answers more closely resembles the requirements of a chatbot used for decision support in clinical practice. Many used GPT-3.5, which may perform significantly worse than GPT-4. One study examined questions posted by patients online, on a forum to which volunteering doctors responded. In the study, three licensed healthcare professionals evaluated the free-text responses. In 79% of the cases, they favoured GPT-3.5 responses over the doctors and the quality score was 21% lower for doctors on average, as scored on a five-category ordinal scale. These findings are opposite to the findings of our study, where the randomly selected doctors’ responses scored higher in 71% of the cases, even though GPT-4 was used. The questions and responses in the patient forum were typically shorter and simpler than the primary care cases used in our study, and the responses were not assessed on specific medical criteria. In a recent preprint, a novel chatbot AI, named AMIE, has been fine tuned to perform a diagnostic interview with a patient through chat. 
It was compared with general practitioners on objective structured clinical examination cases and outperformed general practitioners on most metrics, including suggesting relevant differential diagnoses. This suggests that higher performance is already possible from AI models, but evaluating GPT-4 is still highly relevant since it is widely accessible and may hypothetically already be used by patients and clinicians. Strengths and limitations This is the first study of GPT-4 performance on complex primary care cases with long-form free-text responses, rather than multiple choice. As such, it mimics the scenario where a clinician posts a case summary of a real patient in order to get input on the management. The scoring system was a relatively clear way to quantify the amount of useful content in each answer and demonstrated excellent reliability. No penalty was given to superfluous content, however, which could favour respondents writing longer, but less relevant, responses. The cases used in our study are representative of Swedish primary care, which may differ somewhat from other countries. This should be taken into account when generalising our results to other countries. The set of instructions sent to GPT-4 with each case, sometimes called the ‘prompt’, may influence the quality of responses. This is its own area of research, and optimising the prompt was beyond the scope of this study (see for the rationale behind our choice of prompt). The cases used in the study are publicly available online and could have been part of GPT-4’s training data, but the correct answers are not available in direct association with the questions, so we find it unlikely that this would have affected the result. In some cases, the reviewers could guess which answer was written by GPT-4, which may have introduced some bias. 
However, the impact of this bias was likely reduced by the use of the scoring guide, which focused on the presence and absence of specific criteria rather than an overall subjective assessment of the answer quality. The categorisation of the scoring criteria was conducted by a single researcher. While the extensive number of individual criteria may have mitigated the impact of any potential misclassification, it remains a limitation. Alternative categorisation methods, such as organising criteria by the field of medicine or broader categories like ‘diagnostics’, might have highlighted different aspects of GPT-4’s performance. Implications for current practice and future research GPT-4 falls short in medical accuracy when writing comprehensive assessments of complex primary care cases, compared with human doctors. The difference in performance is both statistically significant and clinically relevant. Hence, case assessments by GPT-4, should not be used directly by primary care doctors. Nor should GPT-4 be implemented as a doctor or nurse substitute for patients. However, newer versions, such as GPT-4o, show promising improvements, and continued advancements in general-purpose chatbots may bring their performance closer to that of human specialists in primary care. Future research on medical chatbots should focus on evaluating emerging models on representative questions asked by clinicians and patients in a clinical setting. At the same time, in line with the previously mentioned AMIE medical chatbot, researchers and developers should aim to optimise the performance of such chatbots, for example, by training them specifically on reliable medical information, optimising prompt engineering techniques, using algorithms for processing a single question in multiple steps or allowing the chatbots access to external sources of information and tools, including other categories of AI-models. 
Our study indicates that significant enhancements over GPT-4's performance are necessary, particularly in the areas of suggesting relevant diagnoses, laboratory tests, physical examinations, referrals and addressing legal matters. If reliable medical chatbots are developed, they could profoundly impact general practice. Initial contact, triage and management of simple cases could conceivably be handled directly by a medical chatbot. Additionally, these chatbots could serve as constantly available expert advisors for medical staff.
10.1136/bmjopen-2024-086148 online supplemental file 1
10.1136/bmjopen-2024-086148 online supplemental file 2
10.1136/bmjopen-2024-086148 online supplemental file 3
Identification of novel proteins associated with intelligence by integrating genome-wide association data and human brain proteomics | 49d4f377-8137-4b6a-b07a-291c8541a1eb | 11844858 | Biochemistry[mh] | Intelligence refers to an individual’s ability to learn from experience, adapt, shape, and select environments, and is a frontier field in behavioral genetics research . Intelligence has public health significance as it impacts academic performance, future personal health, and social well-being . As a typical complex trait, intelligence is influenced by both genetic and environmental factors and exhibits high heritability. Intelligence is more predictive of important educational, occupational, and health outcomes than any other trait. In the 1970s and 1980s, debates over the genetic versus environmental influences on intelligence spurred larger and higher-quality family, twin, and adoption studies. These studies consistently demonstrated that genetics play a significant role in individual differences in intelligence. Recent genome-wide association studies (GWAS) have successfully identified genetic sequence variations that account for 20% of the 50% heritability of intelligence . Furthermore, a meta-analysis of GWAS in 269,867 individuals clarified the genetic associations with intelligence, identifying 205 associated genomic loci (190 of which were novel) and 1,016 related genes (939 of which were novel) . These genes provide new insights for exploring the molecular mechanisms of intelligence. Proteins are the most effective biomarkers and therapeutic targets as they represent the primary functional components of cellular and biological processes and are the final products of gene expression . Advances in mass spectrometry and spatial proteomics have enabled high-resolution mapping of protein networks in the human brain, providing a foundation for linking genetic variation to cognitive traits . 
Previous studies have found that certain specific proteins are associated with intelligence or neurodegenerative diseases, such as NRX1A and periostin . Recent research further indicates a significant association between proteins and intelligence traits . Exploring proteins in greater depth can help us uncover the biological basis of intelligence and provide new avenues for enhancing cognitive function. Transcriptome-wide association studies (TWAS) are a method used to investigate the correlation between the transcriptome and each genomic locus . Similarly, proteome-wide association studies (PWAS) integrate GWAS data with proteomics data to identify candidate genes associated with a given trait . In this study, we integrated intelligence GWAS data with human brain proteomics PWAS to identify risk genes associated with the proteome and transcriptome of intelligence. 2.1. Data sources 2.1.1. GWAS summary statistics. We utilized the most extensive available intelligence meta-GWAS summary statistics, published by Savage et al. in 2018 . The sample consists of 269,867 individuals from 14 independent epidemiological cohorts of European ancestry, including 9,295,118 genetic variation loci that passed quality testing. 2.1.2. Brain proteomic and genetic data. We used the discovery dataset from the Religious Order Study and Rush Memory and Aging Project (ROS/MAP) and the Banner Sun Health Research Institute (Banner) as the replication dataset. Protein data were obtained from human dorsolateral prefrontal cortex (dPFC) tissues, and matched genotyping was performed. Proteomic analysis utilized isobaric tandem mass tag peptide labeling followed by liquid chromatography-mass spectrometry. Participants in the ROS/MAP cohort underwent genotyping using either whole-genome sequencing or genome-wide genotyping with platforms such as the Illumina OmniQuad Express or Affymetrix GeneChip 6.0. The detailed method can be described by Wingo et al . 
After processing, the PWAS included 8,356 proteins from 376 individuals in the ROS/MAP dataset and 8,168 proteins from 152 individuals in the Banner dataset. 2.1.3. Brain transcriptomic data. The study analyzed brain transcriptome data from postmortem samples of 783 individuals of European descent, drawn from the ROS/MAP, Mount Sinai Brain Bank, and Mayo studies. The primary focus was on gene expression in the dorsolateral prefrontal cortex (dPFC), alongside other regions including the frontal cortex, temporal cortex, inferior frontal gyrus, superior temporal gyrus, and perirhinal gyrus. RNA-seq data underwent comprehensive quality control and normalization, as previously outlined . Additionally, genome-wide genotyping was conducted for participants with transcriptomic data, a total of 13,650 genes from 888 reference brain transcriptomes were retained for the TWAS after quality control. 2.2. Statistical approach 2.2.1. PWAS and TWAS. We used the FUSION standard process to integrate brain protein/gene data with intelligence GWAS. Specifically, we first screened out proteins/genes with significant heritability based on heritability ( P < 0.01). Five different predictive models (top1, blup, lasso, ennet, and bslmm) were then used to construct protein models, and the best model for each protein/gene was selected based on its predictive power. Next, the effect size Z value of intelligence GWAS was calculated, which represents the standardized score quantifying the deviation of the effect size of a given protein/gene from the mean effect size. This Z value was then weighted by the selected predictive model to estimate the protein/gene effect on intelligence. For PWAS results, we performed multiple tests using Bonferroni correction, and proteins with PWAS. P < 2.86 × 10 −5 (0.05/1749) were considered significant. 
For TWAS results, false discovery rate (FDR) correction was used, and genes with P < 0.05 after correction were considered significantly correlated with intelligence. 2.2.2. Causal analysis. To determine causal relationships from our PWAS findings, we utilized two independent methods. For Bayesian colocalization analysis , we used the COLOC tool within the FUSION software to estimate the posterior probability that the same variant affects both GWAS and protein quantitative trait locus (pQTL) signals. Under this framework, five hypotheses (H0 to H4) were evaluated, with H4 suggesting a shared causal SNP. Causality was established if the posterior probability for H4 exceeded 0.5. To further validate these relationships, we applied the SMR method , using pQTL data and intelligence GWAS data. Significant causal associations were confirmed with an adjusted P -value < 0.05 for SMR and an unadjusted P -value > 0.05 for the HEIDI test. 2.2.3. PPI and GO enrichment. For the investigation of causal genes implicated in three diseases, we employed the STRING database to perform an extensive network analysis. In this visualization, the thickness of the line represents the strength of the interaction between two nodes, and we only reserved connections with an interaction score greater than 0.4, with different node colors representing different protein communities. Additionally, we conducted functional enrichment analysis for causal genes pertinent to three categories of diseases using the Metascape online platform . We select the pathways with P < 0.05 (with FDR adjusted) as the significant result. 2.1.1. GWAS summary statistics. We utilized the most extensive available intelligence meta-GWAS summary statistics, published by Savage et al. in 2018 . The sample consists of 269,867 individuals from 14 independent epidemiological cohorts of European ancestry, including 9,295,118 genetic variation loci that passed quality testing. 2.1.2. Brain proteomic and genetic data. 
We used the discovery dataset from the Religious Order Study and Rush Memory and Aging Project (ROS/MAP) and the Banner Sun Health Research Institute (Banner) as the replication dataset. Protein data were obtained from human dorsolateral prefrontal cortex (dPFC) tissues, and matched genotyping was performed. Proteomic analysis utilized isobaric tandem mass tag peptide labeling followed by liquid chromatography-mass spectrometry. Participants in the ROS/MAP cohort underwent genotyping using either whole-genome sequencing or genome-wide genotyping with platforms such as the Illumina OmniQuad Express or Affymetrix GeneChip 6.0. The detailed method can be described by Wingo et al . After processing, the PWAS included 8,356 proteins from 376 individuals in the ROS/MAP dataset and 8,168 proteins from 152 individuals in the Banner dataset. 2.1.3. Brain transcriptomic data. The study analyzed brain transcriptome data from postmortem samples of 783 individuals of European descent, drawn from the ROS/MAP, Mount Sinai Brain Bank, and Mayo studies. The primary focus was on gene expression in the dorsolateral prefrontal cortex (dPFC), alongside other regions including the frontal cortex, temporal cortex, inferior frontal gyrus, superior temporal gyrus, and perirhinal gyrus. RNA-seq data underwent comprehensive quality control and normalization, as previously outlined . Additionally, genome-wide genotyping was conducted for participants with transcriptomic data, a total of 13,650 genes from 888 reference brain transcriptomes were retained for the TWAS after quality control. We utilized the most extensive available intelligence meta-GWAS summary statistics, published by Savage et al. in 2018 . The sample consists of 269,867 individuals from 14 independent epidemiological cohorts of European ancestry, including 9,295,118 genetic variation loci that passed quality testing. 
Next, the effect size Z value from the intelligence GWAS was calculated, representing the standardized score quantifying the deviation of a given protein/gene's effect size from the mean effect size. This Z value was then weighted by the selected predictive model to estimate the protein/gene effect on intelligence. For PWAS results, we corrected for multiple testing using Bonferroni correction, and proteins with PWAS P < 2.86 × 10⁻⁵ (0.05/1,749) were considered significant. For TWAS results, false discovery rate (FDR) correction was used, and genes with corrected P < 0.05 were considered significantly associated with intelligence.

2.2.2. Causal analysis. To determine causal relationships from our PWAS findings, we utilized two independent methods. For Bayesian colocalization analysis, we used the COLOC tool within the FUSION software to estimate the posterior probability that the same variant affects both the GWAS and protein quantitative trait locus (pQTL) signals. Under this framework, five hypotheses (H0 to H4) were evaluated, with H4 suggesting a shared causal SNP. Causality was established if the posterior probability for H4 exceeded 0.5. To further validate these relationships, we applied the SMR method, using pQTL data and intelligence GWAS data. Significant causal associations were confirmed with an adjusted P-value < 0.05 for SMR and an unadjusted P-value > 0.05 for the HEIDI test.

2.2.3. PPI and GO enrichment. For the investigation of causal genes implicated in three diseases, we employed the STRING database to perform an extensive network analysis. In this visualization, the thickness of a line represents the strength of the interaction between two nodes; we retained only connections with an interaction score greater than 0.4, with different node colors representing different protein communities.
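The two correction schemes described above (Bonferroni for the PWAS, Benjamini-Hochberg FDR for the TWAS) can be sketched as follows; the p-values in the example are illustrative, while the Bonferroni threshold reproduces the 0.05/1,749 value stated in the text.

```python
# Multiple-testing sketch: a per-test Bonferroni threshold and a minimal
# Benjamini-Hochberg (BH) step-up procedure for FDR control.

def bonferroni_threshold(alpha, n_tests):
    # Each test is significant only below alpha / n_tests.
    return alpha / n_tests

def bh_fdr(pvals, alpha=0.05):
    """Return the set of test indices significant at the given FDR level."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            cutoff = rank          # largest rank passing the step-up rule
    return set(order[:cutoff])

thr = bonferroni_threshold(0.05, 1749)          # about 2.86e-5, as in the text
sig = bh_fdr([0.001, 0.02, 0.04, 0.9])          # indices {0, 1} pass BH
```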
Additionally, we conducted functional enrichment analysis for causal genes pertinent to three categories of diseases using the Metascape online platform. We selected pathways with P < 0.05 (FDR adjusted) as significant results.
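Pathway over-representation tests of the kind Metascape performs are commonly based on a hypergeometric tail probability. The minimal sketch below illustrates that idea with toy numbers; it is not Metascape's actual pipeline, which additionally applies FDR correction and other refinements.

```python
from math import comb

# Hypergeometric enrichment sketch: with N background genes, K pathway
# members, and a hit list of size n containing k pathway genes, the
# enrichment p-value is the upper tail P(X >= k).

def enrichment_pvalue(N, K, n, k):
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy example: 44 candidate genes against a 1,000-gene background,
# 10 of them falling in a 50-gene pathway (expected by chance: ~2.2).
p = enrichment_pvalue(N=1000, K=50, n=44, k=10)
```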
3.1. Discovery PWAS of intelligence
We integrated human brain proteomics with the latest intelligence GWAS results, using the FUSION pipeline to perform a PWAS on intelligence. The human brain proteome was generated from the dorsolateral prefrontal cortex (dPFC) of 376 European-ancestry participants from the ROS/MAP. After quality control, the proteome consisted of 8,356 proteins, of which 1,469 had significant single nucleotide polymorphism (SNP) heritability ( P < 0.01) and were included in the PWAS. The intelligence GWAS summary statistics were sourced from the latest genome-wide association meta-analysis by Savage et al., which included 269,867 participants of European ancestry. The PWAS identified 44 genes whose cis-regulated brain protein levels were associated with intelligence (FDR P < 0.05). To further evaluate whether cis-regulated brain protein expression mediated the association between these 44 genes' genetic variation and intelligence, we applied COLOC and SMR analyses to the same discovery dataset. Multiple genes showed significant colocalization and causal associations (Supplementary Table S1).
The COLOC analysis revealed that 29 genes, including GPX1, had an extremely high probability of colocalization. The SMR analysis indicated that 37 genes, including GPX1, had significant causal relationships ( P < 0.05). We then performed heterogeneity testing using the HEIDI tool to distinguish between pleiotropy/causal effects and linkage relationships for these 37 genes. HEIDI results indicated that 10 of the 37 genes may be significant due to linkage disequilibrium, while 27 were consistent with pleiotropy or causal relationships (Supplementary Table S1). SMR and HEIDI suggested that 36 genes, including GPX1, may be related to intelligence through cis-regulated brain protein abundance. A total of 20 genes, including GPX1, exhibited high colocalization probabilities and causality, confirmed by both COLOC and SMR analyses.

3.2. Replication PWAS of intelligence
To increase the credibility of our findings, we performed a replication PWAS for intelligence using proteomic and GWAS results that were not included in our discovery analysis. The replication human brain proteome was generated from the dPFC of 152 European-ancestry participants recruited by the Banner Sun Health Research Institute. After quality control, the proteome consisted of 8,168 proteins, of which 1,139 had significant SNP-based heritability ( P < 0.01) and were included in the replication PWAS. Seventeen genes were replicated in the independent PWAS for intelligence, providing greater confidence in our results. Of these, 10 genes were positively correlated and 7 were negatively correlated. Twenty-seven of the 44 significant proteins identified in the discovery PWAS were not detected in the replication PWAS. CRAT, MAP2K2, and TMEM245 were analyzed, but the results in the replication cohort were not significant ( P > 0.05).

3.3. Examination of the potential intelligence-related proteins at the mRNA level
The brain transcriptome data for this study were primarily derived from postmortem brain samples of 783 European-ancestry participants from the ROS/MAP, Mayo, and Mount Sinai Brain Bank studies, focusing on the frontal cortex. Among the 13,650 mRNAs that passed quality control, 6,735 exhibited significant SNP-based heritability and were included in the TWAS. The intelligence TWAS using the FUSION pipeline identified 20 genes whose cis-regulated brain mRNA expression was associated with intelligence (FDR P < 0.05) (Supplementary Table S2). All 44 proteins identified in the discovery PWAS were analyzed at the mRNA level; however, only 28 of them, including GPX1, exhibited significant SNP-based mRNA heritability estimates (Supplementary Table S2). The TWAS revealed that 20 of these 28 genes had nominally significant associations with intelligence at the cis-regulated mRNA level, with 10 of these genes showing consistent directionality of effects at both the mRNA and protein levels. Additionally, among the 44 intelligence-related genes, 16 genes showed no evidence of association with intelligence at the mRNA level in the TWAS, including those that were not heritable and thus not included in the analysis. Interestingly, 6 of these 16 genes had significant findings in the discovery PWAS and were replicated (GPT, MAP2K2, KHK, CCBL2, PLEKHA1, and FLOT2). This suggests that PWAS provides novel insights into the pathophysiological mechanisms of intelligence beyond what TWAS has revealed.

3.4. Enrichment Analysis of Pathways Based on Intelligence-Causal Genes
To further identify the functions of the candidate proteins, we performed enrichment analysis using the coding genes of the proteins identified by PWAS.
The enrichment analysis revealed that intelligence-causal genes are significantly involved in various biological processes, including Salmonella infection, glucose response, small molecule metabolic processes, microtubule transport, cellular responses to oxidative stress, steroid metabolism, and intracellular protein transport. These findings were derived from proteomic data, providing insights into the functional roles of these proteins in intelligence-related pathways.

3.5. Protein-Protein Interaction Networks in Intelligence
We investigated the connectivity among the 44 intelligence-related proteins identified in the PWAS using the STRING database and discovered protein communities based on protein-protein interactions (PPIs). A module is defined as a group of proteins that have tighter connections with each other than with other protein groups. Community 1 includes RANGAP1, CSE1L, and STAU1; Community 2 includes SND1, MAP2K2, RAF1, and DCC; Community 3 includes CWF19L1, ERLIN1, GPT, and PPP1R16A; and Community 4 includes MON1A and RAB5B.

Intelligence is a typical complex trait influenced by both genetic and environmental factors, exhibiting high heritability. It is a stronger predictor of significant educational, occupational, and health outcomes than almost any other characteristic. For instance, there is ample evidence that intelligence has an independent causal relationship with the risk of Alzheimer's disease (AD), attention deficit hyperactivity disorder (ADHD), and schizophrenia. Identifying genetic targets that influence intelligence is a critical objective in human genetics research, particularly significant for enhancing the understanding and development of cognitive abilities. Although previous studies have identified the functional relevance of tissue proteins to the development of brain function, the potential biological mechanisms linking tissue proteins and intelligence remain to be elucidated.
In this study, we employed a range of analytical techniques to investigate the functional associations between protein biomarkers in the brain and intelligence. We identified 44 candidate genes associated with changes in brain protein abundance related to intelligence. Among these, 17 genes were replicated in an independent PWAS of intelligence, providing higher confidence in our findings. Additionally, we found that GPX1 and 19 other genes showed both colocalization and causal evidence related to intelligence in the brain PWAS, while the associations of genes such as CSE1L with intelligence were supported at the brain transcript level. Enrichment analyses revealed that these genes participate in various biological processes, including responses to Salmonella infection, glucose metabolism, small molecule metabolic processes, microtubule transport, cellular responses to oxidative stress, steroid metabolism, and intracellular protein transport. These results suggest that these genes may collectively influence intelligence performance by regulating these critical pathways. Further analysis indicates that these genes may synergistically participate in the regulation of the target traits at the transcriptomic and proteomic levels, highlighting their potential roles in related biological mechanisms. This finding provides robust support and promising directions for subsequent mechanistic studies and the development of therapeutic targets. Our analysis involves genes previously studied in the context of intelligence. Prior research has identified GPT, an enzyme involved in brain amino acid metabolism, as a candidate gene for intelligence. Its function may be related to cognitive abilities, and it plays a crucial role in the complex behaviors of neurons. Additionally, studies have shown that the antioxidant enzyme GPX1 is widely expressed in brain tissue and is significantly associated with cognitive function.
Moreover, dietary and exercise interventions can enhance cognitive function by regulating GPX levels, which aligns closely with our findings. Furthermore, CSE1L is associated with apoptosis and proliferation, demonstrating a strong correlation with intelligence performance in GWAS. Previous studies have indicated that patients with mutations in MAP2K2 may exhibit better functional preservation of intelligence. Specifically, in terms of neurodevelopmental function, patients with mutations in the MAP2K2 gene show a lower incidence of intellectual disability (ID) compared with those carrying mutations in other genes, such as BRAF and MAP2K1, with an incidence rate of only 25%. Additionally, previous studies have identified NEK4, ERLIN1, PLCL1, SULT1A1, CYSTM1, and PLEKHA1 as candidate genes for intelligence, which aligns with our findings. Specifically, NEK4, one of the largest members of the NEK family, is involved in the DNA damage response, and consistent evidence suggests its association with schizophrenia and bipolar disorder. As a critical gene in cell cycle regulation, NEK4 may play a key role in neuronal proliferation and survival, thereby influencing intelligence performance. Furthermore, research has shown that PLCL1 is significantly associated with green space exposure and is involved in neurotransmitter clearance, affecting the development of intelligence in children. Additionally, PLCL1 has been linked to hereditary dyslexia and ADHD, suggesting potential implications for intelligence development. While SULT1A1 may have some association with intelligence, its function in the brain has not been thoroughly investigated, and further functional studies are needed to validate its specific role. CYSTM1 is a candidate gene that influences pregnancy and has been associated with body mass index and intelligence, indicating its significant role in developmental regulation.
Additionally, PLEKHA1 is related to intelligence through its involvement in protein synthesis, energy metabolism, and amino acid metabolism. This study also offers practical relevance for drug development, biomarker identification, and precision medicine by providing insights into proteins and genes associated with intelligence, which could inform therapeutic and diagnostic advancements for cognitive disorders. In conclusion, this study provides significant contributions to the understanding of the genetic and proteomic foundations of intelligence. We conducted the largest and most comprehensive pQTL-based PWAS of intelligence to date, utilizing the latest GWAS summary statistics. By replicating the PWAS with an independent human brain proteome and validating causal relationships through MR analyses, we strengthened confidence in the identified risk proteins. The integration of PWAS and TWAS analyses allowed us to explore the complex relationships between mRNA and protein levels associated with intelligence, while PPI analysis identified four protein communities, including the module comprising CWF19L1, ERLIN1, GPT, and PPP1R16A, shedding light on critical biological pathways that influence cognitive functions. However, the current study has several limitations. First, while pQTL and eQTL mapping provide valuable insights, they cannot fully capture all GWAS signals or comprehensively interpret the functional roles of genes in the biological pathways underlying intelligence. A single-layer analysis, such as at the protein level, may overlook critical interactions across molecular layers. Future studies incorporating multi-omics approaches, such as methylation quantitative trait loci (mQTL), single-cell sequencing, and whole-genome sequencing, are essential to uncover the complete molecular mechanisms associated with intelligence and to inform the development of tailored therapeutic strategies.
Second, the limited sample size and racial specificity of the proteomic dataset may constrain the generalizability of the findings. Expanding the scale and diversity of brain proteomic data across different populations and age groups will be crucial for improving the robustness of the results, enabling more precise effect estimates, and ensuring broader applicability. Additionally, addressing potential technical biases introduced by the varying genotyping platforms used across datasets could further enhance the reliability of the conclusions.

S1 File: Table S1. Colocalization and causal analysis results for intelligence genes. Table S2. TWAS results for intelligence. (DOCX)
Assessing Caries Removal Efficacy and Pain Perception in Children, Using Smart Bur Versus Carbide Bur: A Randomized Clinical Study

Dental hard tissues can frequently be affected by a persistent condition known as dental caries, which has significant effects on human health. The oral and systemic complications exacerbated by caries diminish quality of life and impose a considerable financial burden on the affected individual. Dental caries is a multifactorial disease resulting from the interaction of diet, microbial load, host factors, and time. The global prevalence of dental caries in primary teeth has been reported to be 46.2%, whereas in Saudi Arabia the prevalence among children ranged from 72% to 84%, as reported in a recent systematic review, indicating a major cause for concern. Regardless of age, caries negatively affects all demographic groups. Caries management encompasses the administration of targeted interventions to halt the progression of existing dental caries and address active lesions that are non-self-cleansing, with the objective of managing caries development at the individual tooth level. Nonselective caries removal involves removal of both soft and firm dentin, irrespective of the proximity of the carious lesion to the pulp, whereas selective caries removal removes caries while considering the closeness to the pulp, so that sound dentin is preserved. The former method is deemed nonconservative and excessive, with higher rates of pulp exposure, compared with the latter. Over the years, the development of optimal tools and techniques for the efficient and prompt management of carious lesions has led to the use of hand instruments, metal carbide burs, diamond burs, chemo-mechanical preparations, air abrasion, and sono-abrasion.
Conventional methods include the use of hand instruments, metal carbide burs, and diamond burs, whereas chemo-mechanical preparations, air abrasion, and sono-abrasion are considered modern techniques. Preservation of sound dentin and conservative cavity preparation are gaining priority in contemporary clinical practice. Mechanical caries excavation using carbide and diamond burs can remove non-decalcified enamel as well as dentin, but it cannot differentiate between carious and sound dentin during cavity preparation. Although chemo-mechanical caries removal techniques do not affect sound dentin, they are reported to be quite time consuming. Studies have also reported the inability of air abrasion to remove soft carious lesions and the inadequate preparation of cavities by sono-abrasion techniques. The drawbacks of these methods have led to the search for an alternative. Boston et al. developed a prototype polymer bur with mechanical properties slightly lower than those of sound dentin. Compared with conventional burs, which are made from metal, polymer burs have a cutting element made of a softer polyamide polymer. This minimizes cutting of the dentinal tubules, leading to minimally invasive caries excavation. The polymer bur has shovel-like cutting edges instead of spiral ones, like traditional burs. It has a Knoop hardness of 50, making it harder than carious dentin (0–30 Knoop hardness) but softer than sound dentin (70–90 Knoop hardness). The polymer cutting edge wears off and becomes blunt when it comes in contact with harder structures, such as sound tooth tissue. Thus, infected dentin is removed, whereas affected dentin is preserved. Commercially available polymer burs, such as SmartPrep SS White burs (Lakewood, NJ, USA), are made of polyether-ketone-ketone polymer.
Generally, dental procedures are thought to cause discomfort, and this belief is especially strong in children. Dental fear heightens the perception of pain in children; its triggers include noises, vibrations, administration of local anesthesia, and the use of high-speed handpieces. Traditional mechanical cavity preparation frequently causes discomfort due to its non-conservative approach. Hence, quicker techniques that produce less painful stimulation should be used when treating children. Although studies have reported the caries removal efficiency and time consumption of polymer burs, there are insufficient data on children's behavior during their use. Hence, the present study aimed to compare outcomes of dental restoration for caries using smart burs and carbide burs in primary molars among 40 children aged between 6 and 12 years. This study also assessed the time taken for caries removal, the efficiency of caries removal, the intensity of pain, and patient satisfaction to determine the clinical success of each method.

Research Design, Ethical Approval, and Informed Consent
This single-blinded randomized clinical study received approval from the Standing Committee for Sabbatical Leaves, Publication and Research Ethics, Jazan University (HAPO-10-Z-001), with reference number REC-45/10/1069, dated April 28, 2024. The detailed study protocol was explained to the participants' parents or guardians, and enrollment proceeded upon acquisition of a signed informed consent form, obtained before the start of the study. The study protocol rigorously followed the ethical principles of the Declaration of Helsinki (1964) and its subsequent revisions.
Sample Size Estimation
The sample size was estimated using the formula below, with 1 and 0.65 being the proportions of carious lesions completely removed using carbide burs and smart burs, respectively, and a clinically significant difference of 0.35 derived from an earlier study.

n = (Z₁₋α/₂ + Z₁₋β)² × [p₁(1 − p₁) + p₂(1 − p₂)] / (p₁ − p₂)²

where Z₁₋α/₂ = 1.96 for a 95% confidence interval, Z₁₋β = 0.84 for 80% power, p₁ = 1 (proportion of carious lesions completely removed using carbide burs), p₂ = 0.65 (proportion of carious lesions completely removed using smart burs), and p₁ − p₂ = 0.35 (clinically significant difference). Substituting these values gives an estimated sample size of 15 teeth in each group. The final sample size used in the study was 20 teeth in each group.

Participant Selection

Inclusion Criteria
The study initially included 40 children aged 6 to 12 years (average age of 8.5±1.05 years) who visited the Pediatric Dental Department's outpatient section. The criteria for inclusion required that the children were in good physical and mental health, with no significant medical history. Additionally, they needed to exhibit positive or definitely positive behavior as determined by Wright's modification of the Frankl behavior rating scale (FBRS) during the initial assessment. Another key inclusion criterion was the presence of a minimum of 2 asymptomatic carious lesions in primary molars, with visible dentin involvement fulfilling International Caries Detection and Assessment System code 4 and, on radiographs, International Caries Classification and Management System code RB4 (radiolucency reaching the middle one-third of dentin). Lastly, the study participants' parents provided written consent and expressed willingness to take part in the study.
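The sample-size formula given earlier can be reproduced numerically; a minimal sketch, using the study's stated inputs (Z values, proportions), which recovers the reported 15 teeth per group:

```python
import math

# Sample-size calculation for comparing two proportions:
# n = (z_a + z_b)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2, rounded up.

def sample_size_per_group(z_a, z_b, p1, p2):
    numerator = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / (p1 - p2) ** 2)

# z_a = 1.96 (two-sided 95% CI), z_b = 0.84 (80% power),
# p1 = 1.0 (carbide), p2 = 0.65 (smart bur).
n = sample_size_per_group(1.96, 0.84, 1.0, 0.65)   # -> 15 teeth per group
```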
Exclusion Criteria
Children under 7 years old, those exhibiting symptoms of irreversible pulpitis or dentoalveolar abscess, children who showed negative or definitely negative behavior based on Wright's modification of the FBRS at the first examination, and children with medical or mental health issues were not included in the study.

Randomization and Allocation Concealment
Forty children requiring restorative treatment were randomly assigned to 1 of 2 groups using a simple randomization method with a 1:1 allocation ratio. Allocation concealment was achieved through the SNOSE (sequentially numbered, opaque, sealed envelopes) method. Envelopes of matching size and color, each containing the label A or B, were placed in a box; participants selected an envelope, and the corresponding label was revealed. Group 1, the carbide bur group, consisted of 20 children who underwent dental caries removal using a conventional rotary carbide bur. Group 2, the smart bur group, consisted of 20 children who received dental caries removal with smart burs.

Intervention Procedure
A single operator performed the entire restoration process for all participants in the study, to minimize any operator-related bias. Treatment-specific equipment and procedures were introduced and demonstrated to the participants using the "tell-show-do" approach. None of the patients in either group received anesthesia. The tooth in question was isolated. Caries-detecting dye was applied using an applicator tip and then rinsed with water. The affected area was excavated using the bur assigned to the group in a handpiece. The removal of caries was confirmed using a caries detector. Following caries removal, the caries detector was applied for 1 min to the remaining lesion.
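The simple 1:1 randomization described above can be sketched as follows; this is illustrative only (the study used sealed envelopes rather than software), with a fixed seed standing in for the pre-prepared envelope sequence.

```python
import random

# 1:1 allocation sketch: 20 "carbide bur" and 20 "smart bur" labels are
# shuffled into a random sequence, analogous to a box of sealed envelopes.

def make_allocation(n_per_group, seed=2024):
    labels = ["carbide bur"] * n_per_group + ["smart bur"] * n_per_group
    rng = random.Random(seed)   # fixed seed so the sequence is reproducible
    rng.shuffle(labels)
    return labels

allocation = make_allocation(20)   # 40 assignments, 20 per group
```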
The area was then rinsed with water, and the effectiveness was assessed using the Ericson scale, as follows: 0, caries removed completely; 1, caries present in the base of the cavity; 2, caries present in base and/or wall; 3, caries present in base and/or 2 walls; 4, caries present in base and/or more than 2 walls; and 5, caries present in base, walls, and margins of the cavity. Any remaining caries, if present, were removed, and permanent restoration with glass ionomer cement was completed. Compared with those in the carbide bur group, the burs in the smart bur group showed wearing of their cutting edges when the bur came in contact with affected dentin.

Outcomes

The time taken in each method was measured and documented from the beginning of caries removal until the cavity was confirmed to be caries-free, using a stopwatch. Pain levels were assessed using the Face Leg Activity Cry Consolability (FLACC) scale and the Wong-Baker FACES pain rating scale (WBS), and the behavior of the child was assessed through the Frankl behavior rating scale (FBRS).

Face Leg Activity Cry Consolability Scale

The FLACC scale was used, as it is a reliable method for objective pain assessment. The FLACC focuses on 5 different behavioral domains to determine pain severity. Facial expressions, including grimacing and frowning, are observed to determine the presence of pain. Leg movements or tightness are examined to detect signs of agitation or stress. The activity domain involves assessing an individual's overall physical mobility, which includes monitoring for indicators of restlessness or reluctance to remain still. Vocal expressions of distress, such as crying or vocalizations, are also considered. Consolability assesses an individual's response to attempts to provide comfort or solace. Each domain receives a score ranging from 0 to 2, with 0 indicating no pain and 2 the most severe pain. The total FLACC score is the sum of the individual domain scores, ranging from 0 to 10. A higher score indicates more severe pain.
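The FLACC scoring just described (five domains, each scored 0–2, summed to a 0–10 total) can be sketched as a small validation helper. The function and domain names below are illustrative, not taken from the study instruments:

```python
FLACC_DOMAINS = ("face", "legs", "activity", "cry", "consolability")

def flacc_total(scores):
    """Sum the five FLACC domain scores (each 0-2) into a 0-10 total."""
    for domain in FLACC_DOMAINS:
        if domain not in scores:
            raise ValueError(f"missing domain: {domain}")
        if scores[domain] not in (0, 1, 2):
            raise ValueError(f"{domain} must be scored 0, 1 or 2")
    return sum(scores[d] for d in FLACC_DOMAINS)

# Example observation: a child grimacing and crying but easily consoled
obs = {"face": 2, "legs": 1, "activity": 1, "cry": 2, "consolability": 0}
print(flacc_total(obs))  # 6 out of 10
```

Validating each domain before summing guards against transcription errors when scores are recorded during the procedure.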
Wong-Baker FACES Pain Rating Scale

The WBS is a tool for assessing subjective pain that features 6 distinct facial expressions. Each expression corresponds to a numerical value from 0 to 10, indicating the intensity of pain. Pain levels were categorized based on the scores: 0 to 4 indicates mild pain, 4 to 6 indicates moderate pain, and 8 to 10 indicates severe pain. Both groups of children were instructed to assess their pain levels using the WBS at 3 specific instances: during excavation of caries, during restoration, and at the end of the treatment.

Frankl Behavior Rating Scale

Wright's adaptation of the FBRS was used to evaluate a child's behavior at different phases throughout the dental treatment. The FBRS is esteemed for its methodical evaluation of a child's cooperation and response throughout dental procedures. The children's behavior was evaluated during several stages of the dental treatment, including intraoral examination, radiographic imaging, excavation of caries, restoration of the tooth, and after restoration of the tooth.

Statistical Analysis

Statistical analysis was done using standard statistical software (SPSS 20, IBM Corp, Armonk, NY, USA). Data normality was checked with the Shapiro-Wilk test. Group allocation based on age and sex was examined using the chi-square test. Intergroup comparison of the time taken for caries removal was conducted using the unpaired t test, and comparisons of caries removal efficiency, patient behavior, and intensity of pain between the 2 groups were conducted using the Mann-Whitney U test.

This single-blinded randomized clinical study received approval from the Standing Committee for Sabbatical Leaves, Publication and Research Ethics, Jazan University (HAPO-10-Z-001), with reference number REC-45/10/1069, dated April 28, 2024.
The detailed study protocol, encompassing minute details, was elucidated to the participants' parents or guardians, and enrollment proceeded upon acquisition of a signed informed consent form. Consent was obtained before the start of the study. The study protocol rigorously followed the ethical principles of the Declaration of Helsinki (1964) and its subsequent revisions.
The distribution of participants into 2 groups based on age and sex is outlined in .
Time Consumption for Excavation

A highly statistically significant difference (P<0.001) was noted for the time needed for excavation of caries, with smart burs taking a longer time (5.2±1.16 min) than carbide burs (2.74±0.91 min), as shown in .

Efficiency of Caries Removal

On analyzing the ordinal parameters, a significant difference was observed for the Ericson scale, with the lowest scores being reported in the carbide bur group [0 (0, 0.25)], as shown in .

Pain Perception During Procedure

During excavation of caries and restoration, most of the participants in the carbide bur group [6 (6, 8), 2 (2, 2.5)] had higher WBS scores than did the smart bur group [2 (2, 4), 0 (0, 0.5)], and the differences were highly significant (P<0.001). Evaluation of FLACC scores revealed a highly statistically significant difference (P<0.001) between the 2 groups, with a higher median score recorded in the carbide bur group [4.5 (2, 5.25)], as shown in .

Patient Behavior During Procedure

A statistically significant difference was observed for FBRS scores before excavation in the smart bur group, with the highest median of "4" reported in the mandibular arch, whereas a statistically significant difference in Ericson scores was observed in the carbide bur group, with a higher score seen in relation to the maxillary arch, as shown in and .
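The time-consumption result above can be sanity-checked from the reported summary statistics alone. A sketch of the pooled-variance unpaired t statistic (a reconstruction from the means and SDs in the text, not the SPSS output):

```python
import math

def unpaired_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with pooled variance."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Smart burs: 5.2 +/- 1.16 min; carbide burs: 2.74 +/- 0.91 min; 20 teeth per group
t = unpaired_t(5.2, 1.16, 20, 2.74, 0.91, 20)
print(round(t, 2))  # ~7.46 on 38 degrees of freedom
```

A t statistic of roughly 7.46 on 38 degrees of freedom lies far beyond the two-tailed 0.001 critical value (approximately 3.57), consistent with the reported P<0.001.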
Discussion

Children present a spectrum of behavior in the dental office. Some are forbearing and robust during dental procedures and are unlikely to show uncooperative behavior, while others are vulnerable to such situations and require more attention for their cooperation. Dealing with anxiety and fear in children during dental appointments has been a significant challenge. Various strategies have been suggested to help control and alleviate anxiety when visiting the dentist. Effective pain control is essential in reducing anxiety and enhancing cooperation in children during dental procedures. The time needed for dental procedures and the process of caries excavation can significantly affect children's behavior, leading to increased anxiety and fear of the dentist. The adoption of a self-limiting and painless technique for caries excavation has become a focal point of interest, particularly in the realm of pediatric dentistry. The response of the pulp and the pain related to it are strongly influenced by factors including the thickness of the remaining dentin.
Therefore, methods combining a conservative and painless approach to caries removal play a major role in pain management in children. One such innovative design incorporating these properties is the smart bur. Thus, in this study, we aimed to assess the clinical effectiveness and cooperation of children during restoration using smart burs, compared with conventional carbide burs. Researchers have used various methods to assess anxiety among children. These include psychometric methods, physiological techniques, and projective techniques. Projective techniques require children to comment on pictures and thereby enable researchers to identify the hidden feelings of the child. In the present study, the modified WBS was used for subjective assessment of pain scores. Excavation and restoration activities in this study demonstrated a notably high statistically significant difference using this scale. Median pain scores were found to be significantly lower when using smart burs than when using carbide burs. This was in accordance with a previous study conducted by Thomas et al, in which lower pain scores during caries excavation and restoration using smart burs were observed. This served as an indirect indicator of the child's receptiveness to the use of the smart bur. The lower pain perception by the children treated with smart burs could be due to lower stimulation of the dentinal tubules by the smart burs. Previous studies have indicated a preference for polymer burs over other methods for caries excavation. Various pain perception scales, such as the visual analog scale, facial image scales, and questionnaires, have been used in these studies. Comparisons have been made between ceramic burs and diamond abrasive round burs as alternative caries excavation techniques. The rotary burs used were highly abrasive and could have led to the inadvertent removal of affected dentin.
As a result, patients might have experienced discomfort and pain during the procedures, contributing to the preference for polymer burs in these studies. The FLACC scale was used in the present study as another method to determine pain, by assessing facial expressions and observable behaviors. It was noted that the median pain scores were lower in the smart bur group than in the carbide bur group. Conversely, Goyal et al found that smart burs were less preferred than ceramic and diamond burs, as measured using the same rating scale. The researchers suggested that the lower acceptance of polymer burs could be due to the increased time needed and noise generated during cavity preparation when using these burs. Assessment of the child's cooperation during the procedure was conducted using the FBRS in the present study, which is a reliable tool for evaluating children's behavior in dental settings. Although we did not obtain statistical significance in the present study, children treated with smart burs exhibited higher scores, representing a positive child. The discomfort experienced by the children while using carbide burs could be attributed to the removal of the affected dentin, along with the odontoblastic reaction zone plugs. This resulted in the exposure of the permeable sound dentin. In contrast, polymer burs caused little change to the affected dentin, which increased overall patient satisfaction. Another noteworthy finding in our study was the lower caries removal efficiency among children treated with smart burs, compared with those treated with carbide burs, in whom complete removal of caries was seen. Prabhakar et al in their study observed that polymer burs were more effective in removing soft carious lesions than firm lesions. As per the observations of Dammaschke et al, the reduced efficacy of smart burs in caries removal can be due to the self-limiting design of the bur.
Hence, the affected dentin remains, while the diseased dentin is eliminated. The position of a tooth in the dental arch also plays an important role in caries removal efficiency; a higher Ericson score was seen in relation to the maxillary arch in the carbide bur group in the present study. According to Divya et al, polymer burs were found to be more effective in maintaining dentinal tubules with minimal damage, as opposed to traditional burs. The time required for cavity preparation using smart burs was greater than that needed for carbide burs in the present study. El Baz et al noted that the disparity in time is due to the faster speed of carbide burs in comparison with that of smart burs. The reason for the extended time taken, as explained by Vijay et al, was the requirement to substitute the worn smart burs during excavation. This result agreed with those of previous studies. Dammaschke et al, however, reported no difference in cavity preparation time between smart prep burs and carbide burs. The greater time consumption by the smart bur could be attributed to some of its properties, for instance, the softer polymer material used to design the bur, the direction of caries removal by the bur (working from the center and top of the lesion, progressing outward and downward, thereby removing the lesion layer by layer), the lower rotational speed of the bur, and the self-limiting property of the bur. Polymer burs are an innovative design for efficient and precision-focused cavity preparation. The efficacy of these burs in selectively eliminating the infected dentin while minimizing disruption to the deeper affected dentin near the pulp is crucial for patient comfort during cavity preparation. Using an instrument with restricted cutting ability in the affected dentin supports the creation of adhesive cavity designs in minimally invasive dentistry.
These polymer burs are thus revolutionizing the operative management of dental caries by reducing thermal impact, providing better precision, and improving patient comfort. Alongside the merits of the polymer burs, we identified limitations while using the bur, including its difficulty in excavating small carious lesions and its becoming blunt when it touched enamel during caries excavation. Although time consumption is a factor, overall patient comfort has to be valued. We used both the FLACC scale, assessed by healthcare professionals, and the WBS, self-reported by the child, to provide a comprehensive evaluation of pain acceptance by the child, which can be considered a major strength of this study. Moreover, the design of the study and the standardization used add to the robustness of this study.

Limitations

One potential limitation of this study was its narrow scope in comparing smart burs with only carbide burs, whereas incorporating other minimally invasive methods could have provided a more comprehensive analysis. Additionally, the study focused on a specific age group of children. We suggest that future research should explore the acceptability of smart burs among a wider pediatric population. Owing to the benefits and drawbacks identified in our study, we concluded that smart burs can be a promising tool for restorations in children. They serve as an exceptional instrument for dentinal caries removal, with the unique ability to remove infected dentin while preserving the deeper affected dentin. This approach minimizes pain and enhances cooperation among pediatric patients.
Conclusions

Pain perception among children was lower and overall satisfaction was higher in the smart bur group, whereas caries removal efficiency was higher in the conventional carbide bur group. Therefore, we conclude that restoration using smart burs minimizes pain and enhances cooperation of pediatric patients.
Digital health delivery in respiratory medicine: adjunct, replacement or cause for division? | 7726c016-586c-4846-bafc-64f2fc9f7fd2 | 11423130 | Internal Medicine[mh] | “AI will never replace physicians – but physicians who use AI will replace those who don't” Jesse Ehrenfeld, American Medical Association President, July 2023 Digital medicine is an umbrella term used to describe innovations in healthcare enabled by a variety of digital technologies. There are three broad categories of digital medicine, namely connected health, eHealth and precision medicine. Connected health involves the collection and analysis of continuously recorded remotely monitored physiological data and health behaviours . These data provide granular information on an individual's health status . eHealth refers to the use of information technology to support healthcare delivery; examples of its use include digital pharmacy services and platforms such as the electronic patient health record . Precision medicine involves advanced statistical analysis of genetic, clinical, behavioural and physiological data, which is used to gain unique person-specific diagnostic, treatment and prognostic insights . The opportunities for these new tools in the field of respiratory medicine are immense. For example, personalised treatments for lung cancer are increasingly being recommended for individuals based on data obtained from advanced statistical analysis of digitally formatted molecular and genomic data . Even in common clinical conditions such as COPD, patients have many co-existing health conditions as well as individual health behaviours . This means that to understand an individual's symptoms, such as breathlessness, remote monitoring tools are required. These can help to provide insights into the many potential causes, such as levels of exercise, medication adherence and spirometry . 
Outside of the potential application of digital medicine to improve direct clinical care, digital technologies offer huge opportunities for the delivery of healthcare. Electronic health records (EHRs) are an example of this aspect of digital medicine. The development of artificial intelligence (AI) tools such as large language models (LLMs) and chatbot assistants designed specifically to perform routine tasks are opportunities for digital technologies to improve the clinician's workflow . Despite their potential, there are many important limitations to these technologies. Central to these are privacy concerns and the risk of systemic bias. Sharing health data with agents outside of the care team, as well as data theft from security breaches, are real areas for privacy concern . Remote monitoring digital tools provide important insights into social and human behaviour; however, they can also be considered intrusive and thus also raise privacy concerns . The potential for social divide, with those with poor access to the internet being left behind, and the inherent racial bias in certain AI models, are major concerns with regard to perpetuating systemic bias as the use of digital health tools expands . Reliability, consistency and translatability are also key concerns to be addressed before any AI tool can be widely distributed. AI utilises complex statistical functions to apply advanced algorithms to health data in an attempt to mimic human clinical reasoning. Different clinical settings can influence AI outputs, leading to inconsistencies and challenges in reliably reproducing or translating them across other healthcare settings . Healthcare delivery has become fragmented as patient care is shared and delivered by a combination of increasingly sub-specialised physicians, primary care physicians, emergency departments and out-of-hours urgent care services . This fragmentation emphasises the need for shared, objectively collected health data. 
The recent experience of telemedicine, which at first seemed so convenient but has not been persistently adopted because it failed to address the need for in-person interactions with clinicians, illustrates the fundamental limitation of digital medicine. Central to the adoption of these innovations will be how they impact the interactions between clinicians and patients. In this article, we outline digital developments in respiratory medicine to show the opportunities as well as the threats of this new field of medicine and discuss how these developments may be incorporated into clinical practice. The focus of this review is on how innovations in these areas may alter how healthcare is delivered by respiratory clinicians under the three broad themes of connected health, digital information technologies and precision medicine. A PubMed search of respiratory medicine, airways, sleep, radiology, machine learning, AI and digital technology was performed to support this review. A glossary of some of the more frequently used methods referred to in this article is listed in .

As one of the best recognised examples of digital medicine, telemedicine serves as an alternative to traditional in-person clinic visits. There are many cases in respiratory medicine where telemedicine has developed into a sustained and practical way of delivering healthcare. For example, telemedicine is well suited to managing patients with sleep and ventilation disorders. Using remotely monitored data from ventilation devices and connecting virtually with the patients most in need of care, rather than holding routine in-person clinic visits, makes for a more efficient delivery of care. Similarly, pulmonary rehabilitation can be successfully delivered online, leading to wider participation by patients who might otherwise not be able to travel to in-person classes. These examples address some challenges associated with traditional in-person clinics, notably time constraints and patient convenience.
The approach also leads to enhanced patient retention and reduced carbon emissions by minimising travel requirements. Patients report high rates of satisfaction with telemedicine-delivered care, demonstrating that such services are not only practical but well received. However, clinician enthusiasm for telemedicine has significantly declined since the coronavirus disease 2019 (COVID-19) pandemic, when its use flourished. Some of this waning enthusiasm may reflect the difficulty in financial reimbursement, as well as a return to the “old ways”, wherein clinicians feel that they deliver better care in person, providing the “human touch”. Clinician concerns include the lack of same-day diagnostics, language barriers, cultural differences and technological issues, all of which can hinder communication when done remotely. As an adjunct to standard clinical care, telemedicine is a viable alternative for rural populations, as well as some racial and ethnic minorities and other historically underserved communities. For these groups it will not replace current care models; rather, it will serve as an adjunct to support care. Emerging during the COVID-19 pandemic, virtual wards were developed to monitor patients at home and thus avoid unnecessary hospitalisation. One example during the COVID-19 pandemic was their use in remotely monitoring patients who may have required escalation or hospital admission for “silent hypoxia” in the absence of associated breathlessness. Another is home sleep apnoea testing, which, when used appropriately in selected populations, increases convenience and accessibility while reducing waiting times for appointments, diagnosis and treatment initiation. Home wards have the potential to worsen socioeconomic division, as they require a patient to have a suitable infrastructure to remain at home.
This requirement means that socially or economically disadvantaged groups may not be able to utilise these services, so this is not a model that can be adopted globally in its current form, despite the potential for cost saving and patient convenience. Digital therapeutics (DTx) deliver medical interventions directly to patients using evidence-based, clinically evaluated software aimed at treating and preventing a broad spectrum of diseases and disorders. DTx are increasingly being used in respiratory medicine for conditions such as smoking cessation. Virtual cognitive behavioural therapy platforms, accessible online or via app-based programmes, are now recommended by National Institute for Health and Care Excellence guidelines for the treatment of insomnia. Digital therapeutics for dysfunctional breathing, a common yet debilitating condition, are also in development. A major threat to digital therapeutics comes from social media apps and influencers that might spread misinformation and unproven therapies through unregulated applications. However, on balance, regulated digital therapeutics offer tailored treatments for benign chronic conditions, provide a welcome and suitable replacement for some clinic activities and reduce the burden on services. Wearable devices, such as smart watches, which collect biometric data including oxygen saturation, heart rate variability and sleep patterns, are used in telemedicine as an adjunct or replacement for some laboratory tests. Examples in respiratory medicine include bespoke devices used to screen for sleep apnoea and home spirometry, with data stored on person-specific platforms for monitoring patients with idiopathic pulmonary fibrosis, cystic fibrosis and following lung transplant.
Digital inhalers, used to monitor adherence and technique, when paired with lung function, have been shown to reduce the need for biologic add-on therapy in asthma and subsequently reduce the cost to the healthcare system (with the financial effect of different rates of add-on therapy estimated to reduce costs by €3000 per patient per year, or a lifetime saving of over €60 000). Telemedicine, virtual wards, digital therapeutics and remote-monitoring technologies have all developed through the innovation cycle from initial enthusiasm, through a phase of disappointment, and are emerging with better-defined adjunct roles supporting aspects of healthcare. provides further examples of the use of digital health in optimisation of routine care in respiratory medicine. The EHR is an underappreciated but essential tool of digital medicine. EHRs are thought to be essential for patient safety, and their potential benefits are also clear from a research perspective. The landmark COVID-19 RECOVERY trial demonstrated the value of the EHR. Baseline demographics, patient randomisation and online follow-up were all collected from the EHRs, and this was pivotal to the success of the trial, which randomised over 6000 patients in a 3-month period. On the other hand, EHRs impose significant burdens and frustrations among clinicians and have been linked with clinician burnout. Responding to this frustration, AI-powered algorithms that streamline data entry and centralise documentation of patient care are being developed. Examples include natural language processing models, which process and analyse free text and have been shown to increase data accuracy and reduce human error in medical record keeping. GPT-4 and other bespoke LLMs are increasingly being used to create structured “pseudo-personalised” documentation, such as letters to patients explaining results and summarising data for other healthcare professionals.
Future uses include opportunities to incorporate data from remote monitoring devices into LLM-enabled chatbots, which would deliver autonomous monitored care. In short, novel AI technologies have the potential to move EHRs from foe to the embodiment of digital medicine. Major areas of concern still persist regarding data privacy and security; when breaches occur, they can be devastating, with both loss of clinical data and of trust in the providers. Despite these concerns, there are clear opportunities for AI to change how the EHR serves clinical care. One of the key features of AI that makes it suitable for clinical applications is its ability to recognise patterns. A glossary of some of the more frequently used AI methods referred to in this article is listed in . Some cases are illustrated to show the potential of these technologies, the steps required before broad adoption and the need to retain the patient–clinician relationship. In chest radiology, AI, in particular using deep-learning techniques, can identify a variety of conditions, such as interstitial lung disease, pneumothorax, cystic lesions and pulmonary nodules, in particular those most likely to be malignant. Utilising AI to detect these conditions has the potential to improve diagnostic accuracy by reducing human error as well as by supporting nonspecialist centres to deliver care at the same standard as advanced ones. Some important practical issues hamper their widespread clinical deployment. For example, in the case of the radiologic assessment of pneumothorax, the presence of a chest drain affects the accuracy of the AI model. Recognition of such artefacts has led researchers to realise that there needs to be a robust system of testing with a “human in the loop”. In other words, before AI-enabled technology is made commercially available, robust testing, validation and certification are needed and, once used in practice, repeated re-audits are required.
Sleep medicine is ideally placed to benefit from these technologies and deliver more personalised diagnosis and treatment. Data-driven machine-learning algorithms have been shown to be effective through precision-based patient stratification to identify where patient testing should be performed. Further development of these technologies and analysis of data from smartwatches and other wearable devices could lead to the automation of the diagnostic and therapeutic assessment of patients with sleep apnoea. It is not difficult to imagine an app deployed on a phone's operating system that, when linked to a smart watch, detects that the wearer has sleep apnoea, recommends a continuous positive airway pressure provider, monitors the effectiveness of therapy and provides practical information on mask fitting and adherence. The entire management could thus be provided with no clinician input. The regulatory and indemnity risks to manufacturers of these remote monitoring devices and software are large and potentially unavoidable barriers to the complete replacement of clinicians. However, the ethical concern of not being able to attend to the long waiting lists of people with sleepiness is an important issue to consider when evaluating the threat that fully automated systems pose to clinician livelihood. AI models that predict sepsis and acute renal failure, and that outperform rule-based models, have already been developed. The first-generation models had low specificity (many false positives), which affected their accuracy and practical usability. One reason for their low specificity was that the input data used to train the models was derived from retrospective EHR data. Missing data or imprecise recording of the timing of events, such as when blood tests were taken or the precise times that clinical notes refer to, significantly impacted the models' performance. Such limitations could be overcome by prospective studies where data is comprehensively collected.
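The alarm-fatigue problem that low specificity creates at low event prevalence can be made concrete with basic Bayes arithmetic. The sensitivity, specificity and prevalence figures below are illustrative assumptions, not values from any published sepsis model:

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Fraction of positive alerts that are true positives (Bayes' theorem)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative ward where 2% of patients develop sepsis:
# even with 90% sensitivity and 80% specificity...
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.80, prevalence=0.02)
print(round(ppv, 2))  # 0.08: roughly 1 in 12 alerts is a true case
```

This is why a model with a reasonable headline accuracy can still drown clinicians in false alarms, and why prospective, comprehensively collected training data matters.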
In time, we foresee that AI-informed clinical decision aids will replace generic guidelines and rules of thumb for clinical decision-making. While the above-mentioned models may replace guidelines and assist clinicians in detecting potential risks, it will be a good deal more challenging for the models to develop to the point where they can predict when an event will occur. It will also be a challenge for them to suggest a treatment that would be better than that suggested by experts, particularly where determining causality is required. In addition to what the models do, the next issue will be whether a model applies to the individual being treated. Models developed in one group of patients may not transfer to another. Transparency of a model in terms of analytics and algorithms is important for patient safety and for earning the trust of the treating clinician. The methodologies used in chest radiology, sleep medicine, sepsis and other fields, with examples, are demonstrated in and . While the above examples illustrate the near-term potential of digital medicine to make healthcare safer and more efficient, they focus on specific applications of particular technologies. However, the real impact of these innovations will be appreciated when all of the aspects of digital medicine are incorporated into a complete solution for a particular healthcare need. For example, a truly digital model of lung cancer screening might involve AI that identifies patients at risk of lung cancer from an EHR, generative AI sending out personalised invitations to participate, smart chatbots scheduling appointments and addressing patient concerns, and an AI-trained radiology system analysing the scan and arranging follow-up tests. How would such an intervention be received? Concerns regarding the robustness of AI algorithms in finding patients, scheduling appointments and interpreting complex respiratory data pose a legal risk.
The erosion of clinical autonomy, as well as litigation fears, would interfere with the patient-centred approach of traditional doctor–patient consultations. Clinical leadership in scenarios such as this will be a critical decider of how healthcare delivery is shaped by digital technologies. The integration of the novel tools of digital health requires clinicians to see the need, clinical utility and potential downsides of these innovations. It is they who will need to adapt to new workflows and practices, but they cannot do so alone. Collaborating with other disciplines, such as technology developers, engineers and information technology specialists, with patient involvement, to design user-friendly devices will increase the likelihood of successful implementation. Regulation should focus on clinical effectiveness, not simply on whether the system works. Privacy and protection of patient data remain of paramount importance, from both the patient and clinician perspective. Patient data remains subject to cyber-attacks, with a risk of data leaks to insurance companies and clear cost and confidentiality implications. Privacy violations and potential data breaches of sensitive information may create a barrier to universal uptake from a patient point of view. From a patient perspective, access and affordability are also important. Patients from underserved communities may encounter problems including a lack of devices, connectivity issues and limited digital literacy. Access to a readily available internet service remains a major barrier. For example, it is estimated that access to mobile internet in sub-Saharan Africa remains at 40%, while availability of uncensored information is also frequently lacking globally. This has the potential to exacerbate existing disparities in healthcare. Sufficient support also needs to be widely available to those who experience technical problems or require assistance with device setup or troubleshooting.
Digital devices, however convenient, are not a one-size-fits-all solution, and this must be considered if medical technology is to be rolled out universally across the world. Even in the acute hospital setting, concerns have been raised about medical devices used in diagnostics relying on racially based algorithms that have been linked to under-recognition and under-reporting of conditions in certain populations. Telemedicine, digital therapeutics and remote monitoring have already changed many aspects of everyday care for many patients with respiratory conditions. Near-term developments will include trusted platforms for patient education and engagement. Such platforms are needed not just because of the opportunity they offer but because misinformation is widely spread and shared online. From a clinician perspective, leadership in devising how these technologies are designed, tested and implemented into healthcare is pivotal. Connected health in the form of telemedicine and remote digital monitoring is already established in many domains of respiratory medicine, including asthma, pulmonary rehabilitation and sleep medicine. Machine-learning tools, including image recognition software used in respiratory radiology, will almost certainly become part of day-to-day practice in the coming years, once concerns around generalisability have been addressed. AI in many forms may improve administration, record keeping and other administrative tasks. It remains uncertain whether the promises of better models of personalisation and prediction will translate into clinically meaningful and cost-effective products for clinicians. As AI evolves and evidence emerges from real-world experience, clinicians will have an obligation to work with other healthcare providers and regulatory agencies to establish clinical guidelines, quality metrics and standards of care for the use of digital health in clinical settings.
Collaborating with other disciplines, such as technology developers, engineers and information technology specialists, along with patient involvement, to design user-friendly devices will increase the likelihood of successful implementation. Healthcare providers and training bodies must provide staff and trainees with continuous training to enhance proficiency in utilising digital health technologies. Within the field of respiratory medicine, we will soon need to establish which areas are suited to virtual care models from the perspectives of health outcomes and patient satisfaction. Thereafter, the patient–clinician relationship may shift dramatically, and only time will tell how well this is tolerated on both sides. We cannot truly adopt these techniques until we are sure they are equitable for all and until ethical concerns regarding data protection, patient consent and socioeconomic, gender and race inequalities are overcome.
Let's talk about sexuality – A web‐based survey of self‐reported competence in sexual problems among obstetrician‐gynecologists in Finland INTRODUCTION Good sexual health is considered to be one of the cornerstones of good quality of life. For women, obstetrician‐gynecologists (OB/GYNs) hold an important role in the assessment of sexual health issues. Sexual concerns are frequent; nevertheless, women may not bring up the topic themselves during appointments, instead often expecting the healthcare professionals to initiate the conversation. According to a Latvian study, 80% of the women reported that they would like to be asked about sexual health issues during their gynecological visits, but only one‐third of them had that experience. In a Norwegian study, 87% of women stated that they would accept, and 35% would welcome, the gynecologist asking about sexual function during an appointment. Shame and other psychoemotional barriers were considered to be the main obstacles preventing women from starting the conversation. In a Swedish study, the majority of young women having a gynecological examination reported that they had never been asked about sexual health issues (76%–99%, depending on the specific question). A British survey showed that 37% of the women in (uro)gynecology clinics had a sexual complaint, but only 17% volunteered this information, and the rest only admitted it when questioned. The American College of Obstetricians and Gynecologists recommends that OB/GYNs initiate a clinical discussion of sexual function during routine care visits. Likewise, the European Board and College of Obstetrics and Gynecology states that women should have the opportunity to address sexual health problems alongside matters related to contraception and general sexual health.
However, for clinicians, several barriers may hinder them in bringing up the issue, the most frequently identified being the limited time in appointments and a lack of education. Additionally, embarrassment and the absence of effective treatment options have been found to be obstacles to discussing sexual health issues. Mixed results have been reported in studies focusing on the association of OB/GYNs' gender and age with bringing up sexual health issues as a part of their routine clinical work. In some studies, female and younger physicians were found to be more likely to ask about sexual activity compared to male and older physicians, respectively. However, in other studies, no differences concerning the OB/GYNs' gender or age have been found. According to a meta‐analysis conducted in the United States, male OB/GYNs typically provide higher self‐ratings than do female OB/GYNs in a number of areas (training, knowledge, performance, confidence, and competence or ability). In that meta‐analysis, altogether 97 articles assessed gender differences, of which 11 articles evaluated self‐ratings. Similar findings of higher self‐reported ratings among male physicians compared to female physicians have also been reported in other specialties. Many barriers to discussing sexual health have also been reported from the perspective of the patients. According to a Swedish study, only 28% of women with prolonged or severe dyspareunia had consulted a physician. In another study carried out in five Anglophone countries, only 32% of men and women with sexual problems had sought medical care. The barriers to seeking medical care included the lack of bothersomeness, embarrassment, doubt about the possibility of a cure, faith in the spontaneous remission of the problem, and the fear of stigma. In addition, many patients did not think they had a medical problem.
Our study investigated OB/GYNs' self‐reported competence in discussing and treating sexual problems during appointments in Finland. The barriers related to bringing up the issue were also assessed. We hypothesized that OB/GYNs face a range of barriers that prevent them from bringing up sexual health issues, even though they treat problems closely related to patients' sexual function. The information provided by our study can be used for planning and organizing future education in sexual medicine. MATERIAL AND METHODS This study was a part of the Finnish Sexual Medicine Education (SexMEdu) study investigating the level of education in sexual medicine in Finland. The participants were members of The Finnish Society of Obstetrics and Gynecology, which consists mainly of OB/GYN specialists and residents. The Society had 1212 members in 2019, including both working and retired OB/GYNs. The vast majority of OB/GYN specialists and residents belong to The Finnish Society of Obstetrics and Gynecology, as it offers national educational meetings throughout the year and serves as a networking platform. OB/GYN residents often join the Society at the beginning of their training. In Finland, there were 680 specialists in OB/GYN in 2019, of whom 87% were female. The Finnish Society of Obstetrics and Gynecology permitted us to send a questionnaire to members using its register of contact details. We did not have access to the actual register; instead, the Society forwarded our request to its members. A web‐based questionnaire and two reminders were sent between January 2019 and February 2020. Furthermore, an additional email was sent to chief physicians of OB/GYN departments in hospitals in order to improve the response rate. In the preface, it was stated that the questionnaire was directed only at OB/GYN specialists and residents.
Background information included gender (female/male/other), age, education (specialist/resident), occupational status (hospital/private sector/researcher/clinical teacher/primary health/retired/other [maternal leave/leave of absence/sick leave/not currently working]/student; every respondent could have several occupations), number of patients treated per day (1–10/≥11), and number of patients per day with whom sexual health issues were dealt (0/1–5/≥6). The questionnaire was a modification of the Portuguese SEXOS study questionnaire. Permission to use the questionnaire was received from the Portuguese researchers. Translation to Finnish was carried out from the English version of the SEXOS questionnaire. This part of the study included the following four fields: (A) Self‐reported competence in discussing and treating patients with sexual problems (three separate questions), (B) Barriers to bringing up sexual problems during OB/GYNs' appointments (nine separate items), (C) Source of education in sexual medicine (two separate questions), and (D) Need for education in sexual medicine (two separate questions). The questions are presented in Table . The web‐based questionnaire was programmed not to proceed in case of a missing answer, ensuring that every participant submitted a complete questionnaire. Possible duplicate questionnaires were omitted (identified by the same gender, age, university and year of graduation of the medical degree, and university and year of graduation as an OB/GYN specialist). 2.1 Statistical analyses Data is presented with frequencies (percentages). In the analyses, each question in fields A and B was dichotomized (A: questions 1 and 2 were “poor” or “quite poor” vs “good” or “quite good” and question 3 was “a major problem” or “a moderate problem” vs “not a problem” or “a minor problem”; B: “very much” or “much” vs “not at all” or “some”). The “cannot say” responses in field B and in question 3 in field A were omitted from the analyses.
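The dichotomization described above amounts to a simple response recoding. A minimal sketch of how the competence ratings map to binary outcomes (the actual analyses were run in SAS, and only field B and question 3 of field A had a "cannot say" option):

```python
POOR = {"poor", "quite poor"}
GOOD = {"good", "quite good"}

def dichotomize_competence(response: str):
    """Recode a four-point competence rating: 1 = poor/quite poor, 0 = good/quite good.
    Any other response (e.g. "cannot say") returns None and is dropped from the analysis."""
    if response in POOR:
        return 1
    if response in GOOD:
        return 0
    return None

responses = ["poor", "good", "quite poor", "cannot say"]
coded = [dichotomize_competence(r) for r in responses]
analysed = [c for c in coded if c is not None]  # [1, 0, 1]
```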
Question 2 in field C was dichotomized as “insufficient” or “quite insufficient” vs “sufficient” or “quite sufficient.” Question 1 in field C and question 2 in field D were multiple‐choice questions with several options. In the four fields of interest (A–D), multivariable binary logistic regression was carried out with adjustment for the OB/GYNs' gender (female/male), age (28–39/40–49/50–74 years), daily number of patients treated (1–10/≥11) and daily number of patients with whom sexual health issues were dealt (0/1–5/≥6). In each field, each question was examined separately in the analyses. The results are presented using adjusted odds ratios (aORs) with 95% confidence intervals (CIs). p‐values of less than 0.05 were considered statistically significant. Statistical analyses were performed using the SAS System for Windows, version 9.4 (SAS Institute Inc.). 2.2 Ethics statement The study protocol was approved by the Ethics Committee of Turku University (44/2017) on September 11, 2017. The Finnish Sexual Medicine Education (SexMEdu) study respected the Helsinki Declaration in terms of the anonymity of the participants and the obtaining of informed consent. Replying to the questionnaire implied consent, which was made clear to the respondents within the questionnaire. RESULTS The survey was completed by 328 respondents, resulting in a response rate of 27%. Of these, 275 were OB/GYN specialists and 53 were residents. Eight respondents reported not working as clinicians, leading to their exclusion. In addition, there were 21 possible duplicates, which were omitted. Thus, 299 questionnaires were eligible for the analysis (Figure ). Basic characteristics of the respondents are shown in Table . The mean age of the respondents was 47.1 years (SD 11.0, range 28–74 years). The mean age of the female respondents was 46.5 years (SD 10.5, range 28–74 years) and that of the male respondents 55 years (SD 14.0, range 30–74 years). Of all respondents, 214 OB/GYNs reported working in a hospital, and of these, 44% also reported working in the private sector, 19% as researchers, 7% as clinical teachers and 3% in primary health care. Moreover, 58 (19%) OB/GYNs reported working in the private sector only.
Furthermore, 12 retired OB/GYNs reported working in the private sector after having had a career working in a hospital, which is also allowed in Finland. The results of self‐reported competence in discussing and treating patients with sexual problems are shown in Table . Most of the OB/GYNs (72%, n = 215/299) reported that their general competence in discussing sexual problems with their patients was good or quite good. However, an identical percentage (72%, n = 216/299) reported that their competence in treating patients' sexual problems was poor or quite poor. Compared to the male OB/GYNs, the female OB/GYNs were more likely to report a poor or quite poor competence in treating (female 75%, n = 209/278 vs male 33%, n = 7/21) their patients' sexual problems. Additionally, there was a statistical tendency (aOR 4.41, CI: 0.95–20.36, p = 0.058) for the female OB/GYNs to be more likely to report a poor or quite poor competence in discussing sexual problems with their patients (female 29%, n = 82/278 vs male 10%, n = 2/21). As for age groups, the OB/GYNs in the age group of 40–49 years were less likely to report poor or quite poor competence in discussing compared to the age group of 28–39 years. No differences according to the number of patients treated daily were found. Furthermore, the more often the OB/GYNs dealt with sexual health issues with patients daily, the less likely they were to report a poor or quite poor competence in discussing (0 patients per day dealt with sexual health issues: 43%, n = 18/41 vs 1–5 patients per day: 27%, n = 61/221 vs ≥6 patients per day: 14%, n = 5/37) and treating (0 patients per day dealt with sexual health issues: 89%, n = 36/41 vs 1–5 patients per day: 76%, n = 183/221 vs ≥6 patients per day: 32%, n = 12/37) their patients' sexual problems.
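For intuition, the crude (unadjusted) odds ratio behind the treating-competence gender comparison can be computed directly from the reported counts (209/278 female vs 7/21 male OB/GYNs reporting poor or quite poor competence). Note that this differs from the paper's aORs, which additionally adjust for age and daily caseload via multivariable logistic regression:

```python
import math

def crude_odds_ratio(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Poor/quite poor competence in treating: female 209 of 278, male 7 of 21.
or_, lower, upper = crude_odds_ratio(a=209, b=278 - 209, c=7, d=21 - 7)
# or_ is roughly 6.06 with a 95% CI of about 2.35 to 15.6
```

The Wald interval on the log scale is the standard textbook approximation for a 2x2 table; the adjusted estimates in the article require fitting the full logistic model to the individual-level data.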
If the patient brought up sexual health issues herself, almost all (98%, n = 294/299) of the OB/GYNs reported having no or only minor problems with discussing the subject, with no differences related to gender, age, or daily number of patients (no “cannot say” responses). The frequencies of the various barriers to bringing up sexual problems are presented in Table . In the entire cohort, the four most important barriers were “shortness of the appointment time”, “lack of knowledge about sexual medicine”, “lack of experience with sexual medicine”, and “sexual problem not being a priority at the appointment”. More female OB/GYNs than male OB/GYNs reported that “shortness of the appointment time”, “lack of knowledge about sexual medicine”, “lack of experience”, and “lack of effective treatment” were barriers much or very much of the time when bringing up sexual problems. Compared to the OB/GYNs in the youngest age group, fewer OB/GYNs in both of the older age groups reported barriers much or very much of the time concerning “shortness of the appointment time”, “lack of knowledge about sexual medicine”, “lack of experience with sexual medicine”, and “fear of failing to respond to patients' sexual problems”. No differences emerged according to the number of patients treated daily. Furthermore, the OB/GYNs who dealt with sexual health issues with patients less frequently were more likely to report that “shortness of the appointment time”, “sexual problem not being a priority at the appointment”, “lack of knowledge”, “lack of experience”, and “fear of failing to respond to patients' problems” were barriers. The participants' sources of education in sexual medicine are presented in Figure . The most important source was medical journals (68%, n = 202/299), followed by consulting/discussing with colleagues (56%, n = 168/299), continuing medical education (CME) (50%, n = 149/299), and medical books (45%, n = 134/299).
Most of the OB/GYNs (95%, n = 283/299) reported that the education in sexual medicine they received during medical school was insufficient, and 83% (n = 248/299) considered the education in sexual medicine they received during their residency to be insufficient. CME was rated better, yet 43% (n = 129/299) still considered it insufficient. Nearly one third of the OB/GYNs (27%, n = 81/299) reported not participating in CME related to sexual medicine at all. A vast majority of the OB/GYNs (92%, n = 276/299) reported a need for CME in sexual medicine. Here, there was a difference between the age groups: compared to the OB/GYNs in the 28–39 age group, those in the 40–49 age group were more likely to report a need for CME (aOR 15.34, 95% CI: 1.92–122.59, p = 0.010). Compared to the OB/GYNs who dealt with sexual health issues with 1–5 patients daily, those who dealt with none were more likely to report a need for CME (aOR 3.34, 95% CI: 1.09–10.26, p = 0.036). No other differences emerged related to gender or daily number of patients. Those reporting a need for CME preferred to receive this education through lectures (91%, n = 249/275) and online learning platforms (58%, n = 160/275), followed by workshops (26%, n = 71/275) and simulations (16%, n = 44/275). DISCUSSION Our study is the first to survey the barriers to bringing up sexual problems in OB/GYNs' appointments in Finland and, to the best of our knowledge, also in Scandinavia. Although the OB/GYNs self‐reported a good level of competence in discussing sexual problems with their patients, they considered their competence in treating these problems to be poor. This finding was most evident among female OB/GYNs. The OB/GYNs indicated several barriers to bringing up sexual problems, among which “shortness of the appointment time” was the most important. Furthermore, “lack of knowledge about sexual medicine” and “lack of experience with sexual medicine” were highlighted.
Interestingly, a minority reported facing barriers related to their “personal attitudes and beliefs”, their “personal discomfort”, or “disability of the patient”. Our findings bring to attention the need for continuing education in sexual medicine, which the OB/GYNs themselves also wished for. It was notable that the education in sexual medicine given in medical school, and even in residency, was considered insufficient. In our study, the most frequently reported barrier to bringing up sexual problems with patients was “shortness of the appointment time”. Our results confirmed those of previous studies among OB/GYNs and urogynecologists in various countries. Similar barriers related to time have also been described in studies conducted in other specialties. Furthermore, we found that female OB/GYNs, younger OB/GYNs, and OB/GYNs who reported dealing with sexual health issues less often were more likely to report this barrier. Sexual health issues can be complex and, thus, undoubtedly time‐consuming to address. Therefore, methods facilitating, for instance, sexual history taking, such as computer applications and screening tools, could be usable. In addition, in our study, “lack of knowledge and experience with sexual medicine” and “fear of failing to respond to patients’ sexual problems” were reported to be important barriers. Comparable findings have previously been reported among both OB/GYNs and general practitioners. All these barriers emphasize the need for high‐quality and sufficient education. Indeed, the majority of the OB/GYNs in our study regarded their education in sexual medicine as insufficient, and the vast majority expressed the need for continuing education. These findings are similar to those of previous studies conducted among OB/GYN residents and medical students. A crucial problem is that education in sexual medicine is fragmented and nonstandardized, and it also differs from country to country.
Most medical programs dedicate only a few hours to sexual health content, the majority of which is focused on reproduction and disorders of anatomy rather than on practicing how to integrate sexual health into clinical anamnesis and conversations. It could be worthwhile to introduce efficient models, such as “Permission, Limited Information, Specific Suggestions, and Intensive Therapy (PLISSIT)” and “Bring up, Explain, Tell, Time, Educate, Record (BETTER)”, which are counseling models for both assessing and managing a patient's sexuality concerns. According to our results, the male OB/GYNs self‐reported better competence than the female OB/GYNs in treating patients with sexual problems, with a similar tendency for discussing these problems. In addition, the female OB/GYNs reported more barriers to bringing up sexual problems, especially those concerning “lack of knowledge, experience, and effective treatment”. Our findings are in agreement with previous studies. According to a meta‐analysis, male OB/GYNs provided higher self‐ratings than female OB/GYNs in a number of areas, such as training, knowledge, performance, confidence, and ability. Furthermore, in a German study among urologists and urology residents, the male urologists self‐reported a higher level of confidence in taking care of patients with sexual‐related problems and reported facing fewer barriers when addressing sexual health issues compared to the female urologists. Compared to female respondents, male respondents have also been found to rate themselves as having a higher level of competence in fields other than medicine (e.g., computer skills, grammar, and mathematics). Accordingly, these results may reflect the historical position of women in society compared to men, as the respondents may have unconsciously adopted these views. It is noteworthy that all these previous studies evaluated self‐reported competence, not actual competence.
Thus, one important point would be to support female OB/GYNs' self‐confidence and self‐esteem. Another explanation might echo the fact that, during the past few years, the proportion of young female OB/GYNs has grown in Finland, and only a few young OB/GYNs are male; in our study, too, the male OB/GYNs were older and, therefore, probably more experienced, which could partly have affected our results concerning gender. The older OB/GYNs were less likely to report barriers to bringing up the issue compared to the younger OB/GYNs. This is a novel finding, and an explanation for it could be that OB/GYNs' experience, self‐confidence, and interest in the topic grow as their careers progress, lessening the barriers to addressing the topic with patients. The life experience of senior OB/GYNs may also make it easier for them to speak about sensitive topics. In some previous studies, female and younger physicians were found to be more likely to ask about sexual activity compared to male or older physicians. In these studies, however, the study aim differed from ours, as we studied self‐rated competence and barriers, not working methods. Possible reasons why some OB/GYNs reported a poor level of competence in treating sexual problems include the ineffectiveness of the available treatments. Thus, in addition to improving education in sexual medicine for physicians, there is a general need for more research on women's sexual problems. One of our study's merits was the questionnaire, which had previously been used in different populations. It contained a wide panel of questions and therefore provided feasible information about sexual medicine. In addition, the questionnaire was online, which permitted anonymous replies; thus, it is plausible that we received more honest answers compared to those obtained during personal interviews. The web‐based questionnaire was also a practical tool, allowing us to gather a large amount of information.
Furthermore, the respondents could choose the place and time most convenient for them to fill out the survey. The questionnaire was designed not to progress if replies were missing, which guaranteed that the questionnaire was complete for every respondent. We enrolled participants from The Finnish Society of Obstetrics and Gynecology. Although our response rate was quite low, it fell within the range reported in previous studies among OB/GYNs (18%–65.6%) and was higher than that reported in a study among urologists (16%). During the last few decades, the general interest in taking part in surveys has declined. It is also noteworthy that there is a large group of retired OB/GYNs among the members of the Society, who do not practice and thus presumably did not reply to the survey. In addition, physicians in other specialties can also be members of The Finnish Society of Obstetrics and Gynecology; however, the preface of the questionnaire indicated that the survey was intended only for OB/GYN specialists and residents. In 2019, at the time of our survey, there were 680 specialists in OB/GYN in Finland under 65 years old. In our study, there were 231 specialists in OB/GYN under 65 years old; accordingly, the respondents to our survey represented one third (34%) of the specialists in OB/GYN in Finland in 2019. Furthermore, the majority (72%) of our respondents reported working in a hospital, which was a somewhat higher percentage than that estimated in the report conducted by the Finnish Medical Association ( https://www.erikoisalani.fi/tulokset/16?emp=rt‐1 ). The fact that we sent an additional email to chief physicians of OB/GYN in hospitals in order to improve the response rate plausibly led to a higher proportion of hospital-based OB/GYNs responding to our survey. Nevertheless, 44% of them reported also practicing in the private sector, which is in concordance with the estimate of the Finnish Medical Association.
However, we are fully aware that, with a higher response rate, our results would have been more reliable and easier to interpret and generalize to OB/GYNs overall. Information about nonresponders was not available for comparison, as we did not have access to the actual register of the Society. Therefore, our results could be distorted by the fact that the OB/GYNs who were more interested in sexual medicine were keener to complete our survey. However, by assessing the information related to the daily number of patients with sexual health issues and by sorting the data in the analyses accordingly, we could evaluate that effect. Furthermore, the information regarding the participants' former education was retrospective, going back several decades for some of the respondents. Females were more strongly represented among our respondents; however, the gender ratio corresponded to that of the OB/GYNs in Finland. Thus, the findings regarding gender differences should be confirmed using larger samples. Last, our study included only Finnish OB/GYNs; therefore, our results might not be directly applicable to OB/GYNs in other countries. However, our respondents likely formed a consistent study group, as Finland is a racially and culturally homogenous country.

CONCLUSION

The Finnish OB/GYNs who participated in this study self‐reported good competence in discussing sexual problems with their patients, whereas their competence in treating these problems was self‐evaluated as poor. Several barriers to bringing up sexual problems emerged, including a lack of time during appointments. Our study clearly showed the great need for continuing education in sexual medicine, as most of the OB/GYNs considered their education to be insufficient and expressed a need for more education. Implementing sexual medicine as one of the learning objectives in the curriculum of the OB/GYN specialist degree could diminish the identified barriers in the future.
AA is the principal investigator and writer of the paper. PP‐K and KK are the leaders and co‐writers of the study. JG and S‐MM are the co‐investigators and co‐writers of the paper. MR and TV are the statisticians of the study. This study was financially supported by Satasairaala Central Hospital (EVO grant, Anna Aromaa) and Turku University Hospital Foundation (Anna Aromaa). None.
Exploring the role of the human microbiome in forensic identification: opportunities and challenges | 60d852de-575e-4e34-be0b-7d6f2013ab4e | 11306296 | Forensic Medicine[mh] | Human microbiome, as Lederberg coined, represents the collection of genome sequences from “the ecological community of commensal, symbiotic, and pathogenic microorganisms that share our body space”, including fungi, bacteria, protozoa, and viruses, that compose the microbiota . One of the major potential advantages of microbiome analysis used in human forensic identification could be the uniqueness of the microbial community in each person. According to several studies, the human microbiome consists of 10–100 trillion symbiotic microbial cells unique to each individual. Moreover, in a reference man (age 20–30 years; weight 70 kg, height 170 cm) the count is estimated to be 3.8 × 10 13 cells, with a total mass of 0.2 kg . These organisms are distributed through the different anatomical sites, according to which they present a specific taxonomic composition. A taxon refers to any group or rank in a biological classification, such as a phylum, order, family, genus, or species, into which related organisms are classified. Despite considerable interpersonal variability, the core microbiome represents a collection of bacterial communities shared within individuals, for example Propionibacterium acnes, a commensal of human skin. The relationship between humans and their microbiome offers a reservoir of information, that could be useful for identification. 
Since human identification plays a primary role in forensics for many legal reasons, including criminal matters such as guilt and impersonation, civil issues such as inheritance or the reunification of orphaned children with other relatives, and administrative, ethical, and humanitarian reasons, the present review aims to provide an update on microbiome studies in forensic human identification, shedding light on how forensic microbiology is reshaping the landscape of forensic investigations. The purpose is also to focus on its applications, benefits, limitations, and future perspectives, in order to understand the robustness and reliability of such studies and their applications in Court. Moreover, this paper can serve as a valuable resource for forensic practitioners confronted with the challenge of identifying unknown individuals using forensic microbiology techniques, particularly in cases where other methods cannot be used.

Eligibility criteria

This systematic review was conducted in adherence to the guidelines stipulated by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Search criteria and critical appraisal

A comprehensive review of the literature and a thorough evaluation of the gathered studies were undertaken. The databases PubMed, Science Direct, Scopus, and Excerpta Medica Database (EMBASE) were utilized to carry out the analysis, spanning from their establishment to October 2023. The following query was used: (“forensic microbiology” OR “microbial forensics” OR “forensic microbial analysis” OR “microbiological evidence”) AND (“human identification” OR “biological identification” OR “identity determination” OR “forensic identification”). Results were then filtered for publications in English, resulting in 35 publications. For each paper included in the literature review, the title, authors, journal, year, and type of publication were extracted.
Bibliographies of all identified papers were reviewed and compared to identify additional relevant literature. Methodological evaluation of each study was conducted according to PRISMA standards, including assessment of bias. All researchers independently reviewed the papers for which the title or abstract appeared relevant and selected those that analyzed the microbiome in relation to human identification; disagreements on eligibility among researchers were resolved by a consensus process. In the screening phase, publications clearly falling out of scope with respect to the aim of this review were excluded. After the screening phase, 19 publications were assessed as eligible for full-text assessment. In addition, 46 articles were identified through backward search (analyzing the cited references in the selected articles), of which a further 34 were eligible for full-text assessment. Finally, 22 articles were included in the systematic review. Figure shows the PRISMA chart, which synthetically describes the screening and inclusion process of the selected articles.

Risk of bias

Highlights of this systematic review include the number and breadth of the collected studies, which span the globe; the hand search and scan of reference lists for the identification of all relevant studies; and a flowchart that describes in detail the study selection process. Despite our efforts to fairly evaluate the existing literature, this review includes studies that were published over a time frame of a few decades; thus, these results should be interpreted considering that the accuracy of scientific procedures may change over the years, especially in the field of molecular biology.
Twenty-two papers dealing with the microbiome and human identification that fulfilled the inclusion criteria were included in the investigation. The main characteristics of the articles, including authors, year, reference number, sample, main findings, and limitations, are comprehensively reported in Table . No case reports or reviews were selected. According to the different aspects of identification, the results were categorized into the following four topics: I. personal microbiome and transfer to the surrounding environment; II. microbiome as indicator of biological profiling features; III. geolocalization; IV. determination of sexual contact. At the end of each article, the individual limits are discussed in detail, whereas the main limitations, common to the various studies analyzed, are summarized and discussed in a separate section.

Personal microbiome and transfer to the surrounding environment

According to Locard's Exchange principle, which posits that “every contact leaves a trace”, human microbial communities have been studied to understand their role in binding an individual to the surrounding environment, as a “personal microbial cloud”. Fierer et al. conducted three studies to demonstrate the potential utility of the human microbiome for forensic identification.
In the first one, they compared bacterial communities on individual keys of three computer keyboards to the communities found on the fingers of the keyboard owners. In the second one, they linked objects to specific individuals by comparing the bacteria on their computer mice against a database containing bacterial community information for more than 250 hand surfaces, including the hand of the owner. Analyzing bacterial 16S rRNA gene sequences, they found a degree of similarity between the bacterial communities (represented in plots generated using the pairwise unweighted and weighted UniFrac distances) on the fingertips of the three individuals sampled and their respective keyboards. They also demonstrated that the bacterial communities on an individual's fingertips are more similar to those found on the keys of that individual's keyboard than to those found on keyboard keys not touched by the individual. In the last study, they aimed to determine whether bacteria on a personal object more closely resembled the owner's skin bacteria than those of the general population. They calculated the phylogenetic distance between the bacterial communities on 9 personal computer mice and each mouse owner's hand, comparing it to the distances between the mouse bacterial communities and the communities on 270 hands that had never touched the mouse. In all nine cases, the bacterial community on a given mouse was significantly more similar (using the unweighted and weighted UniFrac distances) to that on the owner's hand than to those on the other hands in the database, indicating a similarity between the microbiome present on personal items and the subject to whom they belonged and suggesting direct transfer of bacteria from fingertips.
The study also considered the effect of storage conditions on collected skin-associated bacterial communities, revealing that these conditions had little to no influence on bacterial community composition for up to 14 days. Regarding this point, the laboratory conditions, typical of indoor environments (temperature at 20 °C and fluorescent lighting on for 8 h a day), although necessary for the study, differed significantly from real-world conditions. Despite these conclusions, the sample size and the selection of individuals who worked within the same building (two individuals from the keyboard study shared the same office space) could represent limitations to forensic application. In their studies, Schmedes et al. first collected samples from 14 skin body sites from 12 healthy individuals sampled at three time points over a 2.5-year period. They identified stable clade-specific markers that provided individualizing resolution at each body site. Identification was based on skin microbiome profiles generated using the nucleotide diversity (i.e., a measure of strain-level heterogeneity of the microbial population) of each marker. They used Propionibacterium acnes pangenome presence/absence features and the nucleotide diversities of clade-specific markers to identify stable features that can be used to attribute skin microbiomes from multiple body sites to their respective hosts. The manubrium and hypothenar palm sites yielded highly accurate classification rates (97% and 96%, respectively). Nucleotide diversity of stable markers reached accuracies as high as 100% for the cheek, inguinal crease, and popliteal fossa and contributed significantly more than presence/absence features to classification accuracies ( p < 0.01). They also developed a novel targeted sequencing panel, the hidSkinPlex, to attribute skin microbiomes collected from eight individuals from three body sites (i.e., foot, hand, and manubrium) to their host donor.
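Nucleotide diversity of this kind is conventionally computed as the average number of pairwise sequence differences per aligned site. The following minimal sketch uses invented toy marker alignments (the sequences and host labels are illustrative assumptions, not data from the study):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average number of pairwise differences per site across aligned sequences."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Toy marker alignments: a heterogeneous strain population yields pi > 0,
# whereas identical (clonal) strains yield pi = 0.
heterogeneous = ["ACGTACGT", "ACGTACGA", "ACGAACGT"]
clonal = ["ACGTACGT", "ACGTACGT", "ACGTACGT"]
print(nucleotide_diversity(heterogeneous))  # ≈ 0.167
print(nucleotide_diversity(clonal))         # 0.0
```

The intuition is that higher, stable heterogeneity at a marker carries more individualizing information, which is why nucleotide diversity contributed more to classification than simple presence/absence features.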
Three replicate samples were collected from each body site, for a total of nine swabs per individual ( n = 72). The panel consisted of 286 clade-specific markers from 22 bacteria, with > 65% of the markers from P. acnes. Skin microbiome profiles were assessed using subsets of universal (i.e., markers common to all individuals and body sites) and non-universal markers (i.e., all markers present across all samples). The comparison between these two categories showed a significantly higher accuracy (i.e., the percentage of samples classified correctly; p < 0.00001) using enriched hidSkinPlex markers from the foot microbiome than using markers from shotgun data. Enrichment of hidSkinPlex markers provided the capability to identify skin microbiomes from individuals when the body site was unknown to the classifier, with up to 97% accuracy using markers shared across the three body sites. It also gave the ability to identify the body site origin of the skin microbiome sample with up to 86% accuracy. Thus, the hidSkinPlex could serve a dual purpose, providing a method not only to identify individuals but also to predict the body site origin of skin microbiome samples. These studies highlighted the following principal limitations, also reported by the authors: laboratory bacterial contamination, the sharing of microbial communities between individuals (for example, cohabiting couples and family members), the need to analyze further markers of bacterial genera, and the stability of skin microbiomes collected over time intervals, the latter not analyzed in these studies. Park et al. collected samples from 15 individuals (right-handed and healthy; 4 smokers and one who had taken an antibiotic), exploring the microbial communities inhabiting their palms obtained by hand-printing and using culture-based methods. A total of 686 bacterial strains were isolated (with aerobic cultivation only) and identified based on 16S rRNA gene sequence analysis.
The genus Staphylococcus was detected in all participants, and Micrococcus and Enhydrobacter were detected in most participants (87% and 80% of cases, respectively). Despite the small sample, some minor species were unique to specific individuals. The authors concluded that some major species could also be applied as molecular biological markers at the subspecies level, and that minor species could potentially be used for human identification. The sample size and the inclusion of individuals with characteristics that could have influenced the results represent major limitations; for example, smoking and antibiotic use were not explored by the authors. Watanabe et al. investigated the contribution of minor skin taxa to the effectiveness of personal identification, selecting the forehead microbiome as a skin microbiome model, due to the presumed minor contact of this part of the body with objects or other individuals (considering skin parameters such as moisture, pH, and sebum). They recruited 11 individuals (original dataset) and collected 66 forehead microbiome samples at six different time points over two years (33 samples each year). To assess the microbial taxonomic composition of each sample, the 16S rDNA was PCR-amplified. They calculated the Canberra distance between a query sample (unknown individual) and reference samples (known individuals). They evaluated a personal identification accuracy of 95% (63/66). Moreover, they tested the accuracy when acquiring data in different years: using 3 reference samples from the first year and 3 query samples from the second year, they found the accuracy to be 85% (28/33). Furthermore, they evaluated the method using a public dataset (89 individuals) and calculated a personal identification accuracy of 78% (663/837), noting that the accuracy of personal identification increased with more reference samples per individual.
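The matching scheme described above reduces, in essence, to computing Canberra distances between taxon-abundance vectors and assigning the query to the individual whose closest reference sample has the smallest distance. A minimal sketch with hypothetical relative-abundance profiles (names and numbers are invented for illustration):

```python
def canberra(p, q):
    """Canberra distance: sum of |p_i - q_i| / (|p_i| + |q_i|) over non-zero pairs."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(p, q) if a != 0 or b != 0)

def identify(query, references):
    """Return the individual whose nearest reference sample best matches the query."""
    return min(references,
               key=lambda person: min(canberra(query, r) for r in references[person]))

# Rows = reference samples, columns = relative abundances of four taxa.
references = {
    "individual_A": [[0.6, 0.3, 0.1, 0.0], [0.5, 0.4, 0.1, 0.0]],
    "individual_B": [[0.1, 0.1, 0.2, 0.6], [0.2, 0.1, 0.1, 0.6]],
}
query = [0.55, 0.35, 0.10, 0.0]  # unknown sample
print(identify(query, references))  # individual_A
```

More reference samples per individual give the true donor more chances to produce a small minimum distance, which is consistent with the observation that accuracy rose with the number of references.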
In this study, the authors revealed that the taxonomic composition of the skin microbiome was mostly stable over a short period (i.e., up to a few months) but fluctuated slightly over extended periods (i.e., > 1 year), suggesting that the intra-individual taxonomic composition of the human skin microbial community is relatively stable. Despite these promising results, the stability of the microbiome should be studied over longer periods of time, using a larger number of individuals and testing other body parts, considering all specific influencing factors. In fact, this is one of the few studies that uses the forehead as a source of microbiome, which has been hypothesized to be less influenced by external contact (e.g., sebum production). On the contrary, more studies on larger populations should verify the influence of other factors on bacterial communities (including those proposed by the authors themselves). Neckovic et al. considered the potential for human skin microbiomes to be transferred between non-cohabiting individuals, and from an individual to substrates, through direct and indirect contact. They involved six participants placed into three pairs, taking part in direct and indirect modes of transfer. The first mode was measured through the act of a handshake with another individual, followed by contact with a substrate. The second mode involved individuals rubbing a substrate in their left hand, swapping substrates with their partner, and then rubbing the swapped substrate in their left hand. A total of 65 samples underwent 16S rRNA sequencing.
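Neckovic et al. express compositional dissimilarity as Jaccard distances computed on the presence/absence of taxa: 0 means identical taxon sets, and values near 1 mean little overlap. A minimal sketch (the taxon sets are hypothetical):

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| on sets of observed taxa (0 = identical, near 1 = disjoint)."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

reference = {"Staphylococcus", "Micrococcus", "Cutibacterium"}
print(jaccard_distance(reference, reference))                         # 0.0
print(jaccard_distance(reference, {"Micrococcus", "Enhydrobacter"}))  # 0.75
```

Because only presence/absence is used, the metric is insensitive to abundance; that is why it is often paired with phylogeny-aware metrics such as UniFrac in these analyses.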
The Jaccard distances (a dissimilarity measure between two objects: a value of 0 indicates the distance between a sample and itself, whereas a value closer to 1 indicates a greater distance and, therefore, less similarity in microbial community composition) between the reference samples of each participant were all greater than 0.8, meaning there was dissimilarity in the microbial compositions of the skin microbiomes between participants. Each individual reference sample was observed to cluster either within or around the samples of each respective pair, exhibiting closer distances to their mixed samples than to those belonging to another participant pair. The statistical results, illustrated in plots and based on Jaccard and unweighted UniFrac distances between samples, revealed distinct clustering of participant pairs. This suggested that, following direct or indirect transfer of hand-associated microbiomes, this form of analysis may be used to associate individuals with other individuals and/or substrates. The forensic application of the results could be hindered by some elements, first of all the short sampling time (within three days), which does not allow transitions or variations in microbial composition to be appropriately assessed. Furthermore, several factors that may influence the microbiome detected on hands and skin (such as the relative surface areas contacting each other, the level of pressure and friction applied during the contact, and the duration of the contact) should be taken into consideration. Finally, for the purposes of applicability to real contexts, contamination risks associated with all people or objects that came into direct contact with the skin/body site in a specified period, and with the type of interaction, should be considered. These results should also be integrated with the introduction of negative controls, i.e., controls free from contaminating microbial DNA. Lax et al.
recruited two participants to sample their phones, the soles of their shoes, and the floor over the course of two 12-hour time periods on two consecutive days. A further 89 participants took individual samples of their shoes and phones at three different scientific conferences. Random forest models were used to determine which of the two individuals' shoes a sample was taken from, correctly classifying samples more than 50 times as effectively as one would expect by chance. For phone samples, the models were able to identify the participant a phone sample was taken from (error ratio of 13.6). Random forest models were also able to determine which of the three conferences a sample was taken from significantly better than expected by chance for both the shoe and phone environments (error ratio = 11.7 and 8.0, respectively). Regarding the stability of microbial communities, they analyzed the dissimilarity in community composition and considered the phylogenetic distance, finding that phone-associated microbial communities were both less stable (higher median distance) and more variable in their rate of change over time (broader distribution) than shoe-associated communities. They hypothesized that the high volatility of phone-associated microbial communities was likely due to a small microbial biomass prone to rapid turnover in community composition, and to the very high volatility of hand-associated microbiota. They also showed temporal variability in the differentiation of the shoe microbial communities of the two participants. In contrast, the models were unable to determine the specific site where a sample had been taken (for all substrates analyzed); they hypothesized that this was due to the homogenization of communities across the shoe sole over time or to rapid changes in community structure at each sampling site.
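The "error ratio" reported for these models can be read as the misclassification rate expected by chance divided by the rate the model actually achieved — our interpretation of "X times as effective as chance"; the numbers below are hypothetical:

```python
def error_ratio(chance_error, observed_error):
    """How many times better than random guessing a classifier performs."""
    return chance_error / observed_error

# With two equally likely owners, random guessing errs half the time;
# a hypothetical model that misclassifies 1 of 22 samples gives:
print(round(error_ratio(0.5, 1 / 22), 1))  # 11.0
```

Framing performance this way keeps comparisons fair across tasks with different numbers of classes, since the chance baseline changes with the class count.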
This study suggests how the microbiome can be used to trace objects to their owners and to link an individual back to a place. The short sampling time (two days), the small sample and the few substrates analyzed (phone, floor and shoe sole) represent the major limitations. Furthermore, the surface-associated microbial community should be explored, along with whether shoe-sole material and turnover could influence bacterial communities. Meadow et al. characterized microbial communities on seventeen individuals' smartphones, sampling the touch-surface of each participant's own phone as well as the thumb and index finger of their dominant hand (3 samples for each of 17 participants, for a total of 51 samples). They found that the two fingers from each participant had significantly more in common with each other than either did with the phone ( p < 0.001 for both fingers). Handwashing made no significant difference in the resemblance of the two fingers ( p = 0.126) or in the finger/phone connection ( p = 0.7). Women's fingers appeared to share more operational taxonomic units (OTUs) with their phones than men's did, but the difference was not significant ( p = 0.128); both sexes shared more OTUs, on average, with their own phones than with anyone else's. Indeed, an individual's finger shared on average 5% more OTUs with his or her own phone than with everyone else's phones ( p < 0.001). The authors noted several limitations of their study: the sample size, the design of the study as a teaching exercise, and the lack of information about the environmental processes by which microbes grow on a phone's touchscreen and the factors that could influence them (e.g., material type, temperature, pH, humidity, exposure to ultraviolet light and substrate availability).
Furthermore, the authors only considered mobile phones equipped with touchscreens (smartphones) and not those equipped with a keyboard, nor did they distinguish between hand-washing methods, which could also influence the results. Costello et al. conducted a study on the spatial and temporal distribution of the human microbiota, surveying bacteria from up to 27 sites in 7–9 adults on four occasions. They collected 815 samples and assessed differences in overall bacterial community composition using the UniFrac metric (a small distance implies that two communities are similar). For each sample, variable region 2 (V2) of the bacterial 16S rRNA gene was PCR-amplified. They detected a characteristic microbiota for each habitat and a relatively stable set of abundant taxa across people and over time. Indeed, composition varied significantly less within habitats than between habitats. Within habitats, variation was significantly less within individuals sampled over time than between individuals on a given day. After accounting for habitat and host individual, variation was significantly less over 24 h than over 3 months ( p < 0.01). Despite the strong inter- and intrapersonal structuring of bacterial diversity, a high degree of spatial and temporal variability was also evident: about 12% of phylotypes appeared on all dates, while 3% of phylotypes appeared in all individuals, and only 0.1% of phylotypes appeared in all body habitats. Despite these results, a longer observation period and studies on influencing factors such as local chemistry and nutrient availability are needed. For example, the forehead has been identified as a site more susceptible to external factors (mainly the production of sebum).

Microbiome as indicator of biological profiling features

Phan et al.
investigated how the bacterial profile could be used as an indicator of donor characteristics such as sex and ethnicity. In their study, forty-five individuals were asked to hold an autoclave-sterilized playing card, which was subsequently swabbed, with samples collected over the course of two weeks. The difference in microbiota diversity was examined using weighted (quantitative) and unweighted (qualitative) UniFrac distances. They found that Alloiococcus species could be a potential biomarker for sex (64% accuracy rate, indicating a male donor) and ethnicity (56% accuracy rate, indicating donors of Caucasian and mixed ethnicities). Other characteristics, including diet and use of hand sanitizer, were also investigated. Analysis showed Lactococcus as a marker for a Chinese diet type, with a 48% prediction accuracy rate. Finally, concerning the use of hand sanitizers, Alloiococcus was present in only 43% of the bacterial traces from donors who used hand sanitizers, compared to 72% of the traces from donors who did not, with a 51% accuracy rate ( p = 0.003 for unweighted UniFrac distances of microbial community). The limitations highlighted in this article were the sample size, the large standard deviation in samples, the bias of the subjects recruited (all university students) and the low robustness of the predictive models for most features tested, such as sex. Furthermore, no information was recorded about the subjects' history of contact with other objects or the presence of cohabitants or pets. More in-depth analyses could also reveal similar results using a different substrate. Finally, as this study examined a single time point, it is unknown whether any of the identified bacteria would remain, or remain in similar abundance, in subsequent sampling.
Expanding the sample size, the diversity of the subjects and the temporal scope would yield a greater wealth of information on the potential links between microbial signatures and donor characteristics of forensic interest. Regarding sex, Richardson et al. collected personal samples from the hands and other objects in the rooms of 37 students living in a common dormitory, distributed across 28 distinct dorm rooms. Through the study of specific microbial taxa, they identified the sex of the subject with DESeq2, a statistical method for differential analysis of count data that uses shrinkage estimation for dispersions and fold changes to improve the stability and interpretation of estimates. Examining Lactobacillus and Corynebacterium species, a random forest model was able to predict whether a subject was male or female, with an error ratio of about 2.5 and an accuracy of around 80% on the test set. The major limitation of this study was the presence of roommates, since interactions between individuals involve an exchange of bacterial communities and therefore a decrease in differences in taxon abundance. Indeed, an individual's classification error was linearly related to the number of roommates that individual had, with classification error increasing by 18 percentage points for each additional roommate. Fierer et al. collected samples from the palmar surfaces of both hands of 51 students to characterize bacterial diversity on hands and to assess its variability within and between individuals. They observed intra- and interpersonal variation in bacterial community composition: hands from the same individual shared only 17% of their phylotypes, and different individuals shared only 13%. This intraindividual differentiation between the bacterial communities on left and right hands was not significantly affected by handedness, sex, or hand hygiene ( p > 0.05 in all cases). Men and women harbor significantly different bacterial communities on their hand surfaces ( p < 0.001).
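The random forest classification used by Lax et al. and Richardson et al. can be caricatured, in deliberately minimal form, as an ensemble of simple threshold classifiers ("decision stumps") trained on bootstrap resamples and combined by majority vote. The two-taxon abundance data below are hypothetical, and the real studies used full random forest implementations on many more taxa:

```python
import random

random.seed(0)

# Hypothetical relative abundances of two taxa in samples from two shoes, "A" and "B"
samples = [([0.8, 0.1], "A"), ([0.7, 0.2], "A"), ([0.9, 0.1], "A"),
           ([0.2, 0.7], "B"), ([0.1, 0.8], "B"), ([0.3, 0.6], "B")]

def train_stump(data):
    """Fit a one-feature threshold classifier on a bootstrap resample of the data."""
    boot = [random.choice(data) for _ in data]
    best = None
    for feat in range(2):
        for thresh in (0.25, 0.5, 0.75):
            for below, above in (("A", "B"), ("B", "A")):
                correct = sum((below if x[feat] < thresh else above) == label
                              for x, label in boot)
                if best is None or correct > best[0]:
                    best = (correct, feat, thresh, below, above)
    return best[1:]

def predict(forest, x):
    """Majority vote over all stumps in the ensemble."""
    votes = [below if x[feat] < thresh else above
             for feat, thresh, below, above in forest]
    return max(set(votes), key=votes.count)

forest = [train_stump(samples) for _ in range(25)]
print(predict(forest, [0.85, 0.05]))  # expected "A"
print(predict(forest, [0.15, 0.75]))  # expected "B"
```

Bootstrapping plus voting is what makes the ensemble's error estimates more robust than any single classifier, which is why these studies report an "error ratio" relative to chance.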
The limitations of this study include a sample restricted to a population of students and the lack of detailed information on the skin characteristics of the sampled individuals. It is therefore difficult to determine whether sex differences in the bacterial communities on hands are due to skin factors such as pH, sweat or sebum production, frequency of moisturizer or cosmetics application, skin thickness or hormone production. Bell et al. examined the thanatomicrobiome (i.e., the postmortem microbiome) by collecting heart samples from 10 individuals who died of sudden cardiac arrest, with times of death ranging from 6 to 56 h. They amplified the V1-V2 and V4 hypervariable regions of prokaryotic 16S rRNA genes. Individual OTUs were examined, and the relative abundances of the most abundant microbial taxa in all samples were determined for each region (V1-2 and V4). Their study revealed a distinction between the heart thanatomicrobiomes of male and female corpses at all taxonomic levels. For example, at the order level, Lactobacillales and Rhizobiales were only detected in males and Pseudomonadales only in females. Their results showed that sex-dependent changes in thanatomicrobiome composition were statistically significant ( p < 0.005). In this study, apart from the small sample size, the major limitation is the lack of in-depth analysis of the variability of the bacterial community with respect to the time elapsed since death. Furthermore, because the only organ sampled was the heart, these results should be validated on other substrates. Tridico et al. surveyed bacterial communities associated with human scalp and pubic hair from seven healthy Caucasian individuals of both sexes (two of whom were in a relationship), ranging in age from 23 to 53 years. Samples were collected at three time points: initial collection, and 2 and 5 months thereafter.
Forty-two pools of DNA extracts were obtained from human scalp and pubic hairs. Data generated from pubic hair (using next-generation sequencing) revealed a dichotomy between OTUs on male and female pubic hair shafts. Lactobacillus spp. were found in the female pubic hair samples and not in the male samples (except in the cohabiting male). Similar microbial taxa were observed in the cohabiting couple, suggesting interindividual transfer, especially after sexual intercourse. In contrast to pubic hair, scalp hair microbiota showed no correlation with the sex of the donor. Moreover, pubic hair microbiomes appeared to be less influenced by environmental bacteria than scalp hair. The temporal stability analysis found that pubic hair bacteria may be more temporally stable than scalp hair bacteria, and therefore potentially of greater evidentiary value. Data showed that about 17% of pubic hair bacterial OTUs were temporally stable at all time points, while scalp hair hosted, on average, approximately 5% temporally stable bacterial OTUs. Despite these findings, more studies should be conducted on the role of bacterial transfer during contact, the temporal persistence of bacteria after transfer, and sample storage conditions. The forensic application could be especially useful in cases of suspected sexual violence; the temporal persistence of the bacterial community on pubic hair should be studied, particularly since the examination of the victim is often not carried out acutely. Pechal et al. studied the thanatomicrobiome as an indicator of antemortem health condition, which could be used to complement the biological profile. They analyzed microbial taxonomic profiles from a total of 83 cases (less than 24 h postmortem), divided into two groups: cases with evidence of heart disease detected during autopsy and cases of death resulting from violent circumstances.
Heart disease was diagnosed based on examination of the heart (including microscopic analysis) and medical history. To assess whether there were statistical associations between the postmortem microbiome and antemortem health status, they ran binomial logistic regression models contrasting community diversity with heart disease. Examining the bacterial community from the mouth, they found that phylogenetic diversity was a significant predictor of heart disease ( P = 0.038). In contrast, individuals whose death was due to violent circumstances had greater microbial diversity. These data suggested that increased microbial biodiversity may be an indicator of individuals without chronic health conditions, such as heart disease. This study could be biased by the age of the subjects included (44 ± 15 years in the original dataset), as heart diseases typically appear later in life and are chronic conditions, whereas violent deaths tend to involve younger individuals. Studies evaluating the bacterial community at multiple collection times (for example, near, at and after death) should be conducted.

Geolocation

In 2010 the Earth Microbiome Project (EMP) was founded. It represents a systematic attempt to characterize global microbial taxonomy with the aim of understanding biogeographical variations and the factors, such as climate, altitude, latitude, or soil nature, that determine them. Indeed, the characterization of the microbiome may provide information on the geographical origin of an individual. In their study, Nagasawa et al. developed a method to determine the geographic origin of 17 cadavers with known geographic origins by examining polymorphism in the H. pylori vacA region. VacA is a cytotoxin whose gene comprises two variable parts: the s-region (s1 and s2) and the m-region (m1 and m2). East Asian H.
pylori strains are associated with the vacA s1 type; within East Asian countries, the m1 type predominates in Japan and Korea, whereas the prevalence of the m2 type gradually increases in the southern parts of East Asia. The phylogenetic tree of H. pylori showed three major clusters: the East Asian type I, including Japan, China and South Korea; the Western type II, including Russia, the Americas and Europe; and the Southeast Asian type III, including Thailand, Hong Kong, Taiwan, and Vietnam. All the Japanese ( n = 10), South Korean ( n = 1), and Chinese ( n = 2) cadavers examined in the study were classified as type I, the single Thai cadaver was classified as type III, and the single Afghan and Filipino cadavers were classified as (Western) type II. Even though Filipinos and Taiwanese are typically classified in the type III cluster, the different classification in this study could be due to external factors. In fact, the Taiwanese cadaver was classified as type I, probably because the individual, although recorded as ethnic Taiwanese, had lived in Japan from childhood. These findings demonstrate the influence of both the geographic origin and the life history of the cadavers on this method, and recall the difference between geographical origin and ethnicity, the latter still provided by analysis of human genome polymorphism. More studies should be conducted covering more geographical origins and with known background details of the analyzed sample, mostly unknown in this article. Escobar et al. described the composition of the gut microbiota, comparing Colombian adults with populations of different geographic origin (USA, Europe, Japan and South Korea). They included a total of 126 individuals, of whom 30 were Colombian. Each participant provided a fecal sample. They found that the gut microbiota of Colombians was mostly composed of Firmicutes (average ± SD: 79 ± 13%) and Bacteroidetes (17 ± 12%), followed by other phyla present at minor frequencies.
The remaining datasets had lower proportions of Firmicutes and higher proportions of Bacteroidetes, but the dispersion of data among individuals was as marked as in the Colombian dataset. UniFrac analysis indicated that the gut microbiota of Colombians was significantly different from that of Americans, Europeans, and Asians ( p = 0.001). Moreover, they found that the relative abundance of Firmicutes decreased with latitude ( p = 0.002) and that of Bacteroidetes increased with latitude ( p = 0.001). The authors highlighted that the sample size was not designed to achieve statistical power, owing to the lack of previous data on Colombians and the highly variable results of studies performed on other populations. Moreover, given the interplay between geographic origin and diet, they concluded that it would be interesting to tease apart the effects of diet and geography on the composition of the gut microbiota. Brinkac et al. conducted a study comparing variation in the scalp and pubic hair microbiome across different geographic origins. They collected hair samples from the scalp and pubic areas of adults residing in Maryland (MD, n = 8) and California (CA, n = 8). Additionally, scalp hairs were collected from adults residing in Virginia (VA, n = 5). Each individual provided multiple samples, for a total of 42 scalp and 32 pubic hair samples. They observed that the Peptoniphilus and Staphylococcus genera differed in abundance between MD and CA, with no significant clustering by geographic location in either hair type. Compared to scalp hair, the analysis of pubic hair revealed a higher error rate (22.58% versus 17.24% for scalp hair), suggesting that scalp hair has greater geolocation prediction power than pubic hair. More studies are needed to understand the hair characteristics that may influence these results: for example, length, hair collection technique (cut or plucked), sebum production, and environmental or lifestyle factors.
Increasing sample sizes and performing longitudinal studies would help further clarify the usefulness of both scalp and pubic hair as indicators of forensic information.

Determination of sexual contact

The human microbiome has been hypothesized to be potentially useful in studies investigating its transfer during sexual contact. Ghemrawi et al. described genital microbial signatures based on the analysis of five male and five female genital samples (for a total of 10 samples) and compared these results to those from longitudinal studies. They did not include couples in the study, and no information was collected regarding recent intercourse. The shotgun sequencing results showed taxonomic diversity and richness in the penile microbiome, as opposed to the vaginal microbiomes, which were composed predominantly of lactobacilli (about 76% of the total vaginal composition). The authors classified this study as a “pilot study,” to be complemented with a larger sample and longitudinal studies. In fact, some factors that could have influenced the results should be considered: the collection time, the absence of information on previous sexual intercourse, and the presence of other variables (for example, circumcision or the day of the menstrual cycle on which the sample was taken). Williams et al. collected microbiome profiles from pubic hairs and/or swabs taken from the pubic mound region of 43 participants (including 12 partner pairs). Participants provided 1 to 5 sets of sample collections (at 3 set time points), resulting in 155 completed sample collections. Individuals were stratified based on characteristics such as sex, age, ethnicity, sexual activity, condom use, and oral to genital contact.
Results showed that the two couples who did not report sex in the seven days prior to sample collection for any of the time points were the only couples whose male and female samples consistently fell into separate clusters. Regarding the influence of the level of sexual activity, they found a significant correlation between the proportion of couple co-clustering and the average number of times the couple reported having sex during the seven days preceding each sample collection. Increased frequency of sexual activity did not, however, guarantee increased microbiome similarity (for instance, two couples were similarly sexually active but clustered together 33% and 80% of the time, respectively). This result established that sexual activity per se was not sufficient to ensure microbiome sample sharing, and made it unlikely that a single instance of intercourse would always result in detectable transfer. This study would require a larger sample size and greater control over some variables; for example, it is not known whether consensual sexual contact may have different characteristics than contact conducted by force. Furthermore, controlled studies involving the collection of samples immediately prior to sexual contact and then at fixed time points afterwards would serve to quantify the variability in the proportion of transfer both to hairs and to the pubic mound, and how long any mixing is retained. Dixon et al. studied the variation of bacterial communities in six male-female sexual partner pairs before and after sexual intercourse, controlling for female cyclic variation and selecting strict parameters to simulate a single episode of penetrative intercourse. Five replicate swabs (penile skin and vaginal) were collected for each participant and timepoint, totaling 20 per couple (10 male, 10 female). Taxonomic analysis found that in both male and female samples there was an increase in the total genera observed post-coitus.
The most notable change in abundance post-coitus was the increase, in male samples, of the dominant female taxon, Lactobacillus. Few changes were observed in female samples. In three female participants, an increase in the distance between the before- and after-coitus samples was observed, while the male samples showed progressive clustering after coitus. In one pair, by contrast, the female before-and-after samples were tightly clustered, while the male samples were farther apart. The authors hypothesized that both the male and female genital microbiomes might be susceptible to alteration by the opposite sex. Despite these results, the authors highlighted some limitations. They did not know what specific intimate behaviors occurred during the sexual encounter, making it difficult to hypothesize a relationship between microbial diversity and the effect of intercourse. Larger study groups should therefore analyze circumcision as a penile skin variable and evaluate additional time points to assess microbiome recovery. Finally, they did not consider that the partners could be sexually active during menstruation, and it is also conceivable that the volunteers did not respect the abstinence period. More information should also be collected on participants' health and the time of sampling, in order to reduce accidental factors or contamination. Since bite mark injuries can be present in sexual abuse, Kennedy et al. assessed whether oral streptococcal DNA sequences from bite marks could be matched to those obtained from the teeth responsible. They also evaluated the capability of three genomic regions of streptococcal DNA to discriminate between participant samples. They enrolled 16 individuals who generated self-inflicted bites on their upper arms.
The following genetic targets were examined: hypervariable region 9 of the streptococcal 16S rRNA gene, a stretch of noncoding DNA located between the 16S and 23S rRNA genes (ITS), and a stretch encoding the beta subunit of bacterial RNA polymerase (rpoB). The 16S rRNA model showed a sensitivity of 100%, with a 25% false positive rate. The ITS model yielded a 65% chance of obtaining a false positive. Finally, the rpoB model matched all bite marks to the corresponding teeth, achieving perfect discrimination between samples from teeth responsible for a bite and those not responsible. A major limitation of this study, besides the sample size, is that the bite marks were self-inflicted; nor did it analyze how diseases affecting the teeth, such as caries, could lead to microbiome variability. Furthermore, translating this study to real casework, it is not known how long the microbiome left by a bite persists or whether it can be influenced by a microbiome from a different body site or from another individual.

Limitations

For the human microbiome to be effectively applied to identification in forensic science, it must exhibit temporal stability and specificity to particular body sites and to sex. Furthermore, the mechanisms of transfer should be explored in depth, so that the variables that may influence changes can be predicted. These variables can be divided into environmental factors, lifestyle choices, and internal factors, which also include the subject's state of health. In the selected studies, specific limitations were identified and described.
Furthermore, all studies share the following limitations: the sensitivity of the microbiome to intrinsic and extrinsic factors, for example the use of antibiotics or the presence of a disease or of hormonal factors that modify the microbiome; the difficulty of maintaining ideal conditions during sampling, transport and treatment of the microbial community, since different microbial populations may require different protocols (moreover, according to what is accepted by the scientific community, a valid protocol should have been tested in field conditions, subjected to peer review, have a known error rate, be standardized, and be generally accepted; this scientific methodology is the way to ensure the reproducibility and comparability of research results to be applied in concrete judicial investigations, with the same safeguards in terms of privacy and confidentiality as any other human tissue samples or identifying sources of information); the presence of contaminants of human, environmental, animal and insect origin; the sample size of the studies presented in this review, which appears to be still too small; and the lack of a standardized definition of changes in the microbial community in the postmortem period. The postmortem human microbiome, as Javan et al. reported, includes two components: the thanatomicrobiome, consisting of microbes that inhabit internal organs and body fluids after death, and epinecrotic microbial communities, represented by microbes found on the surface of decaying remains. In fact, the thanatomicrobiome is conditioned by many endogenous and exogenous factors, including climatic conditions and the presence of animals, as well as by postmortem translocation and agonal diffusion phenomena. In addition, almost all of the studies presented (with the exception of the study by Kodama et al.
) were constructed according to a rigid design to control interfering variables (e.g., hand washing by subjects, voluntary recruitment of subjects); such a study design could nonetheless be difficult to adapt to real forensic applications. Many studies should be conducted to identify a common matrix (i.e. a sampling site) less influenced by external and internal factors: for example, hands are a useful site because they are more involved in contact, but are also more susceptible to confounding factors, whereas the forehead is less subject to external contact but is influenced by individual factors (for example, sebum production). Robust information on the stability of the microbiome over time is also lacking. Some studies included in this review explore these differences by including set time points; however, information on baseline and long-term behavior is often lacking. Furthermore, it remains unclear how long microbial fingerprints can last unchanged on skin and surfaces such that these traces can be analyzed with reliable results. It has been demonstrated, in fact, that skin microbiota shed by an individual can change over time, undergoing degradation within hours, and temporal variation has been observed in human skin microbial composition. These factors can greatly limit microbiome-based methods of human identification. Kodama et al. conducted a study in which postmortem skin microbiomes and microbiomes from hand-held objects (e.g. phones, doorknobs) were collected to trace associations between individuals and objects. At 16 death scenes, they swabbed the right palm of the decedent and personal objects at different times: at the scene of death, upon arrival at the morgue, and at 6-hour intervals thereafter until autopsy or external examination. A total of 98 objects were swabbed at the 16 death scenes, 88 of which yielded sufficient genetic material for sequencing.
Postmortem skin microbiomes were correctly associated with objects at an average accuracy rate of 75%, although the level of accuracy varied by scene. The observed variation was attributed to the time elapsed since the object was last touched, handling by other individuals, and the nature of the objects, which could inhibit microbial colonization (e.g. cleansers, lubricants, and heat). Regarding the methodology of this review, the lack of consistency and the heterogeneity of the studies, as well as the limitations outlined, preclude the performance of a meta-analysis. Furthermore, performing a quality assessment of the included studies was not feasible due to the vast range of study methodologies and the broad spectrum of definitions. According to Locard's exchange principle, which posits that “every contact leaves a trace”, human microbial communities have been studied to understand their role in binding an individual to the surrounding environment, as a “personal microbial cloud”. Fierer et al. conducted three studies to demonstrate the potential utility of the human microbiome for forensic identification. In the first, they compared bacterial communities on individual keys of three computer keyboards to the communities found on the fingers of the keyboard owners. In the second, they linked objects to specific individuals by comparing the bacteria on their computer mice against a database containing bacterial community information for more than 250 hand surfaces, including the hand of the owner. Analyzing bacterial 16S rRNA gene sequences, they found a degree of similarity between the bacterial communities on the fingertips of the three individuals sampled and those on their respective keyboards (represented in plots generated using pairwise unweighted and weighted UniFrac distances).
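As a toy illustration (not the implementation used in these studies), the unweighted UniFrac distance is the fraction of observed branch length in a phylogenetic tree that leads exclusively to taxa of one of the two communities. The four-taxon tree and communities below are hypothetical:

```python
def unweighted_unifrac(branches, com_a, com_b):
    """Toy unweighted UniFrac: fraction of observed branch length unique to one community.

    `branches` is a list of (length, leaves_below) tuples describing a rooted tree.
    """
    unique = observed = 0.0
    for length, leaves in branches:
        in_a = bool(leaves & com_a)
        in_b = bool(leaves & com_b)
        if in_a or in_b:
            observed += length          # branch leads to at least one observed taxon
            if in_a != in_b:
                unique += length        # branch leads to taxa of only one community
    return unique / observed if observed else 0.0

# Hypothetical 4-taxon tree: each tuple is (branch length, taxa descending from it)
tree = [(1.0, {"t1"}), (1.0, {"t2"}), (1.0, {"t3"}), (1.0, {"t4"}),
        (1.0, {"t1", "t2"}), (1.0, {"t3", "t4"})]
finger = {"t1", "t2"}
keyboard = {"t1", "t3"}
print(unweighted_unifrac(tree, finger, keyboard))  # 0.6 -- 3 of 5 branch-length units unique
```

Unlike the Jaccard distance, UniFrac weights shared membership by phylogenetic relatedness, so communities of closely related taxa score as more similar even when few taxa overlap exactly.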
They also demonstrated that the bacterial communities on an individual's fingertips are more similar to those found on the keys of that individual's keyboard than to communities found on keyboard keys not touched by that individual. In the last study, they aimed to determine whether bacteria on a personal object more closely resembled the owner's skin bacteria than those of the general population. They calculated the phylogenetic distance between the bacterial communities on 9 personal computer mice and each mouse owner's hand, comparing it to the distances between the mouse bacterial communities and the communities on 270 hands that had never touched the mouse. In all nine cases, the bacterial community on a given mouse was significantly more similar (using unweighted and weighted UniFrac distances) to the community on the owner's hand than to those on the other hands in the database, indicating a similarity between the microbiome present on personal items and the subject to whom they belonged and suggesting direct transfer of bacteria from the fingertips. The study also considered the effect of storage conditions on collected skin-associated bacterial communities, revealing that these conditions had little to no influence on bacterial community composition for up to 14 days. On this point, laboratory conditions typical of indoor environments (temperature of 20 °C and fluorescent lighting on for 8 h a day), although necessary for the study, differ significantly from real-world conditions. Despite these conclusions, the sample size and the selection of individuals who worked within the same building (two individuals from the keyboard study shared the same office space) could represent limitations to forensic application. In their studies, Schmedes et al.
first collected samples from 14 skin body sites from 12 healthy individuals sampled at three time points over a 2.5-year period. They identified stable clade-specific markers that provided individualizing resolution at each body site, based on skin microbiome profiles generated using the nucleotide diversity (i.e., a measure of strain-level heterogeneity of the microbial population) of each marker. They used Propionibacterium acnes pangenome presence/absence features and the nucleotide diversities of clade-specific markers to identify stable features that can be used to attribute skin microbiomes from multiple body sites to their respective hosts. The manubrium and the hypothenar palm yielded highly accurate classification rates (97% and 96%, respectively). Nucleotide diversity of stable markers reached accuracies as high as 100% for the cheek, inguinal crease and popliteal fossa, and contributed significantly more to classification accuracy than presence/absence features ( p < 0.01) . They also developed a novel targeted sequencing panel, the hidSkinPlex, to attribute skin microbiomes collected from eight individuals at three body sites (i.e., foot, hand and manubrium) to their host donor. Three replicate samples were collected from each body site, for a total of nine swabs per individual ( n = 72). The panel consisted of 286 clade-specific markers from 22 bacterial species, with > 65% of the markers from P. acnes. Skin microbiome profiles were assessed using subsets of universal markers (i.e., markers common to all individuals and body sites) and non-universal markers (i.e., all markers present across all samples). The comparison between these two categories showed a higher, statistically significant ( p < 0.00001) accuracy (i.e. the percentage of samples classified correctly) using enriched hidSkinPlex markers from the foot microbiome, as opposed to markers from shotgun data.
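Nucleotide diversity, the strain-level measure underpinning these profiles, is conventionally computed as the mean number of pairwise differences per aligned site. A minimal sketch of that calculation is given below; the toy reads are invented for illustration and are not data from the study.

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Nucleotide diversity (pi): mean pairwise differences per site
    across a set of aligned sequences from one marker."""
    if len(seqs) < 2:
        return 0.0
    length = len(seqs[0])
    assert all(len(s) == length for s in seqs), "sequences must be aligned"
    total, pairs = 0, 0
    for a, b in combinations(seqs, 2):
        total += sum(x != y for x, y in zip(a, b))  # mismatched positions
        pairs += 1
    return total / (pairs * length)

# Toy example: three aligned reads from a hypothetical clade-specific marker.
reads = ["ACGTACGT", "ACGTACGA", "ACGAACGT"]
print(round(nucleotide_diversity(reads), 4))  # -> 0.1667
```

Higher values indicate a more heterogeneous strain population at that marker, which is what makes the measure individualizing when it is stable over time.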
Enrichment of hidSkinPlex markers provided the capability to identify skin microbiomes from individuals when the body site was unknown to the classifier, with up to 97% accuracy using markers shared across the three body sites. It also gave the ability to identify the body site origin of the skin microbiome sample with up to 86% accuracy. Thus, the hidSkinPlex could serve a dual purpose, providing a method not only to identify individuals but also to predict the body site origin of skin microbiome samples . These studies highlighted the following principal limitations, also reported by the authors: laboratory bacterial contamination, the sharing of microbial communities between individuals (for example cohabiting couples and family members), the need to analyze further markers of bacterial genera, and the stability of skin microbiomes collected over time intervals, the latter not analyzed in these studies. Park et al. collected samples from 15 individuals (right-handed and healthy, 4 smokers and one who had taken an antibiotic), exploring the microbial communities inhabiting their palms obtained by hand-printing and using culture-based methods. A total of 686 bacterial strains were isolated (only with aerobic cultivation) and identified based on 16S rRNA gene sequence analysis. The genus Staphylococcus was detected in all participants, and Micrococcus and Enhydrobacter were detected in most participants (87% and 80% of the cases, respectively). Despite the small sample size, some minor species were unique to specific individuals. The authors concluded that some major species could be applied as molecular biological markers at the subspecies level, and that minor species could potentially be used for human identification. The sample size and the inclusion of individuals with characteristics that could have influenced the results represent major limitations; for example, smoking and antibiotic use were not explored by the authors . Watanabe et al.
investigated the contribution of minor skin taxa to the effectiveness of personal identification, selecting the forehead microbiome as a skin microbiome model, due to the presumed minor contact of this part of the body with objects or other individuals (considering skin parameters such as moisture, pH and sebum). They recruited 11 individuals (original dataset) and collected 66 forehead microbiome samples at six different time points over two years (33 samples each year). To assess the microbial taxonomic composition of each sample, the 16S rDNA was PCR-amplified. They calculated the Canberra distance between a query sample (unknown individual) and reference samples (known individuals), obtaining a personal identification accuracy of 95% (63/66). Moreover, they tested the accuracy when acquiring data in different years: using 3 reference samples from the first year and 3 query samples from the second year, they found the accuracy to be 85% (28/33). Furthermore, they evaluated the method using a public dataset (89 individuals) and calculated a personal identification accuracy of 78% (663/837), noting that the accuracy of personal identification increased with more reference samples per individual. The authors showed that the taxonomic composition of the skin microbiome was mostly stable over a short period (i.e. up to a few months) but fluctuated slightly over extended periods (i.e. >1 year), suggesting that the intra-individual taxonomic composition of the human skin microbial community is relatively stable. Despite these promising results, the stability of the microbiome should be studied over longer periods of time, using a larger number of individuals and testing other body parts, considering all specific influencing factors. In fact, this is one of the few studies that uses the forehead as a source of microbiome, which has been hypothesized to be less influenced by external contact (e.g. sebum production).
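The identification step described here — assigning a query sample to the individual whose reference profile lies closest in Canberra distance — can be sketched in a few lines. The donor names and abundance profiles below are invented for illustration, not data from the study.

```python
def canberra(p, q):
    """Canberra distance between two relative-abundance vectors.
    Positions where both entries are zero are skipped."""
    return sum(abs(a - b) / (abs(a) + abs(b)) for a, b in zip(p, q) if a or b)

def identify(query, references):
    """Nearest-neighbor assignment: return the name of the reference
    profile closest to the query in Canberra distance."""
    return min(references, key=lambda name: canberra(query, references[name]))

# Invented taxon-abundance profiles (fraction of reads per taxon).
refs = {
    "donor_A": [0.60, 0.30, 0.10, 0.00],
    "donor_B": [0.10, 0.20, 0.30, 0.40],
}
query = [0.55, 0.35, 0.10, 0.00]  # unknown forehead sample
print(identify(query, refs))  # -> donor_A
```

Because each taxon term is normalized by its own magnitude, the Canberra distance is sensitive to differences in rare taxa — consistent with the study's interest in the contribution of minor skin taxa.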
On the contrary, more studies on larger populations should verify the influence of other factors on bacterial communities (including those proposed by the authors themselves) . Neckovic et al. considered the potential for human skin microbiomes to be transferred between non-cohabiting individuals, and from an individual to substrates, through direct and indirect contact. They assigned six participants to three pairs, taking part in direct and indirect modes of transfer. The direct mode was measured through the act of a handshake with another individual, followed by contact with a substrate. The indirect mode involved individuals rubbing a substrate in their left hand, swapping substrates with their partner, and then rubbing the swapped substrate in their left hand. A total of 65 samples underwent 16S rRNA sequencing. The Jaccard distances (a proximity measure used to compute the similarity between two objects: a value of 0 indicates the distance between a sample and itself, whereas a value closer to 1 indicates a greater distance and therefore less similarity in microbial community composition) between the reference samples of each participant were all greater than 0.8, meaning there was dissimilarity in the microbial compositions of the skin microbiomes between participants. Each individual reference sample was observed to cluster either within or around the samples of its respective pair, exhibiting closer distances to the pair's mixed samples than to those belonging to another participant pair. The statistical results, illustrated in plots and based on Jaccard and unweighted UniFrac distances between samples, revealed distinct clustering of participant pairs. This suggested that, following direct or indirect transfer of hand-associated microbiomes, this form of analysis may be used to associate individuals with other individuals and/or substrates.
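The Jaccard distance described above operates on presence/absence data and can be sketched directly on OTU sets; the OTU labels in this example are hypothetical.

```python
def jaccard_distance(otus_a, otus_b):
    """Jaccard distance on presence/absence OTU sets:
    0 = identical membership; values near 1 = little shared membership."""
    a, b = set(otus_a), set(otus_b)
    if not a and not b:
        return 0.0
    return 1 - len(a & b) / len(a | b)

# Hypothetical OTU sets from two hand swabs.
hand_1 = {"otu1", "otu2", "otu3", "otu4"}
hand_2 = {"otu3", "otu4", "otu5", "otu6", "otu7"}
print(round(jaccard_distance(hand_1, hand_1), 2))  # -> 0.0 (sample vs itself)
print(round(jaccard_distance(hand_1, hand_2), 2))  # -> 0.71 (dissimilar communities)
```

Unlike weighted metrics, this measure ignores abundance entirely, which is why the review distinguishes it from the unweighted UniFrac distance that additionally incorporates phylogenetic relatedness.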
The forensic application of these results could be hindered by several elements, first of all the short sampling time (within three days), which does not allow transitions or variations in microbial communities to be appropriately assessed. Furthermore, several factors (such as the relative surface areas contacting each other, the level of pressure and friction applied during the contact, and the duration of the contact) that may influence the microbiome detected on hands and skin should be taken into consideration. Finally, for the purposes of applicability to real contexts, contamination risks associated with all people or objects that came into direct contact with the skin/body site in a specified period, and with the type of interaction, should be considered. These results should also be integrated with the introduction of negative controls, i.e. controls free from contaminating microbial DNA . Lax et al. recruited two participants to sample their phones, the soles of their shoes and the floor over the course of two 12-hour time periods on two consecutive days. A further 89 participants took individual samples of their shoes and phones at three different scientific conferences. Random forest models were used to determine which of the two individuals' shoes a sample was taken from, correctly classifying samples more than 50 times as effectively as one would expect by chance. In phone samples, the models were able to classify the participant a phone sample was taken from (error ratio of 13.6). Random forest models were able to determine which of the three conferences a sample was taken from significantly better than expected by chance for both the shoe and phone environments (error ratio = 11.7 and 8.0, respectively). Regarding the stability of the microbial community, they analyzed the dissimilarity in community composition and considered the phylogenetic distance.
They observed that phone-associated microbial communities were both less stable (higher median distance) and more variable in their rate of change over time (broader distribution) than shoe-associated communities. They hypothesized that the high volatility of phone-associated microbial communities was likely due to a small microbial biomass that would be prone to rapid turnover in community composition, and to the very high volatility of hand-associated microbiota . They showed temporal variability in the differentiation of the shoe microbial communities of these two different people. In contrast, the models were unable to determine the specific site where the sample had been taken (for all substrates analyzed). They hypothesized that this was due to the homogenization of communities across the shoe sole over time or to rapid changes in community structure at each sampling site. This study suggests how the microbiome can be used to trace objects to their owners and to lead an individual back to a place. The short sampling time (two days), the small sample and the few substrates analyzed (phone, floor and shoe sole) represent the major limitations. Furthermore, the surface-associated microbial community should be explored, as well as whether shoe sole material and turnover could influence bacterial communities. Meadow et al. characterized microbial communities on seventeen individuals' smartphone touchscreens, sampling the touch-surfaces of each participant's own mobile phone as well as their own thumb and index finger on the dominant hand (3 samples for each of 17 participants, for a total of 51 samples). They found that the two fingers from each participant had significantly more in common than either did with phones ( p < 0.001 for both fingers). Handwashing made an insignificant difference in the resemblance of the two fingers ( p = 0.126) and in the finger/phone connection ( p = 0.7).
Women's fingers appeared to share more operational taxonomic units (OTUs) with their phones than men's, but the difference was not significant ( p = 0.128), since both shared more OTUs, on average, with their own phones than with anyone else's. Indeed, an individual's finger shared on average 5% more OTUs with his or her own phone than with everyone else's phones ( p < 0.001). The authors explained several limitations of their study: the sample size, the design of the study as a teaching exercise, and the lack of information about the environmental processes by which microbes grow on a phone's touchscreen and the factors that could influence them (e.g., material type, temperature, pH, humidity, exposure to ultraviolet light and substrate availability). Furthermore, the authors only considered mobile phones equipped with touchscreens (smartphones) and not those equipped with a keyboard, nor did they distinguish hand-washing methods, which could also influence the results . Costello et al. conducted a study on the spatial and temporal distribution of the human microbiota, surveying bacteria from up to 27 sites in 7–9 adults on four occasions. They collected 815 samples; for each sample, variable region 2 (V2) of the bacterial 16S rRNA gene was PCR-amplified. They assessed differences in overall bacterial community composition using the UniFrac metric (a small distance implies that two communities are similar). They detected a characteristic microbiota for each habitat and a relatively stable set of abundant taxa across people and over time. Indeed, they found that composition varied significantly less within habitats than between habitats. Within habitats, variation was significantly less within individuals sampled over time than between individuals on a given day.
After accounting for habitat and host individual, variation was significantly less over 24 h than over 3 months ( p < 0.01). Despite the strong inter- and intrapersonal structuring of bacterial diversity, a high degree of spatial and temporal variability was also evident: about 12% of phylotypes appeared on all dates, 3% of phylotypes appeared in all individuals, and only 0.1% of phylotypes appeared in all body habitats . Despite these results, a longer observation period and studies on influencing factors such as local chemistry and nutrient availability are needed. For example, the forehead has been identified as a site more susceptible to external factors (mainly the production of sebum). Phan et al. investigated how the bacterial profile could be used as an indicator of donor characteristics such as sex and ethnicity. In their study, forty-five individuals were asked to hold an autoclave-sterilized playing card, which was subsequently swabbed, and the samples were collected over the course of two weeks. The difference in microbiota diversity was examined using weighted (quantitative assessment) and unweighted (qualitative assessment) UniFrac distances. They found that Alloiococcus species could be a potential biomarker for sex (64% accuracy rate, indicating a male donor) and ethnicity (56% accuracy rate, indicating donors of Caucasian and mixed ethnicities). In addition, other characteristics (including diet and use of hand sanitizer) were also investigated. Analysis showed Lactococcus as a marker for a Chinese diet type, with a 48% prediction accuracy rate. Finally, concerning the use of hand sanitizers, Alloiococcus was present in only 43% of the bacterial traces from donors who used hand sanitizers, compared to 72% of the bacterial traces from donors who did not, with a 51% accuracy rate ( p = 0.003 for unweighted UniFrac distances of the microbial community) .
The limitations highlighted in this article were the sample size, the large standard deviation in samples, the selection bias of the subjects (all university students) and the low robustness of the predictive models for most features tested, such as sex. Furthermore, in this study no information was recorded about the subjects' history of contact with other objects or the presence of cohabitants or pets. More in-depth analyses could also reveal similar results using a different substrate. Finally, as this study examined a single time point, it is unknown whether any of the identified bacteria would remain, or be present in similar abundance, in subsequent sampling. Expanding the sample size, the diversity of the subjects and the temporal scope would yield a greater wealth of information on the potential links between microbial signatures and donor characteristics of forensic interest . Regarding sex, Richardson et al. collected personal samples from the hands and other objects in the rooms of 37 students living in a common dormitory, distributed across 28 distinct dorm rooms. Through the study of specific microbial taxa, they identified the sex of the subject with DESeq2, a statistical method for differential analysis of count data that uses shrinkage estimation for dispersions and fold changes to improve the stability and interpretation of estimates. Examining Lactobacillus and Corynebacterium species, a random forest model was able to predict whether a subject was male or female, with an error ratio of about 2.5 and an accuracy of around 80% on the test set. The major limitation of this study was the presence of roommates, since interactions between individuals involve an exchange of bacterial communities and therefore a decrease in differences in taxon abundance. In this study, an individual's classification error was linearly related to the number of roommates that individual had, with classification error increasing by 18 percentage points for each additional roommate .
Fierer et al. collected samples from the palmar surfaces of both hands of 51 students to characterize bacterial diversity on hands and to assess its variability within and between individuals. They observed intra- and interpersonal variation in bacterial community composition: hands from the same individual shared only 17% of their phylotypes, with different individuals sharing only 13%. This intraindividual differentiation between the bacterial communities on left and right hands was not significantly affected by handedness, sex, or hand hygiene ( p > 0.05 in all cases). Men and women, however, harbor significantly different bacterial communities on their hand surfaces ( p < 0.001). Limitations of this article could be the sample restricted to a population of students and the lack of detailed information on the skin characteristics of the sampled individuals, which makes it difficult to understand whether sex differences in bacterial communities on the hands may be due to skin factors, for example pH, sweat or sebum production, frequency of moisturizer or cosmetics application, skin thickness or hormone production . Bell et al. examined the thanatomicrobiome (i.e., postmortem microbiome) by collecting heart samples from 10 individuals who died of sudden cardiac arrest, with times since death ranging from 6 to 56 h. They amplified the V1-V2 and V4 hypervariable regions of prokaryotic 16S rRNA genes. Individual OTUs were examined, and the relative abundances of the most abundant microbial taxa in all samples relative to region (V1-2 and V4) were determined. Their study revealed a distinction between the heart thanatomicrobiome of male and female corpses at all taxonomic levels. For example, at the order level, Lactobacillales and Rhizobiales were only detected in males and Pseudomonadales in females. Their results showed that sex-dependent changes in thanatomicrobiome composition were statistically significant ( p < 0.005).
In this study, apart from the small sample size, the major limitation is the lack of an in-depth analysis of the variability of the bacterial community based on the time elapsed since death. Furthermore, because the only substrate used was the heart, these results should be validated taking other substrates into consideration . Tridico et al. surveyed bacterial communities associated with human scalp and pubic hair from seven healthy Caucasian individuals of both sexes (two of whom were in a relationship), ranging in age from 23 to 53 years. Samples were collected at three time points: initial collection, plus 2 and 5 months thereafter. Forty-two pools of DNA extracts were obtained from human scalp and pubic hairs. Data generated from pubic hair (using next-generation sequencing) revealed a dichotomy between OTUs on male and female pubic hair shafts. Lactobacillus spp. were found in the female pubic hair samples and not in the male samples (except in the cohabiting male). Similar microbial taxa were observed in the cohabiting couple, suggesting interindividual transfer, especially after sexual intercourse. In contrast to the pubic hairs, the scalp hair microbiota showed no correlation with the sex of the donor. Moreover, pubic hair microbiomes appeared to be less influenced by environmental bacteria than scalp hair . The temporal stability study found that pubic hair bacteria may be more temporally stable than scalp hair bacteria and therefore potentially of greater evidentiary value. Data showed that about 17% of pubic hair bacterial OTUs were temporally stable at all time points, while, on average, scalp hair hosted approximately 5% temporally stable bacterial OTUs. Despite these findings, more studies should be conducted on the role of bacterial transfer during contact, the temporal persistence of bacteria after transfer, and sample storage conditions.
In fact, this forensic application could be useful in cases of suspected sexual violence. The temporal persistence of the bacterial community on pubic hair should be studied, especially since the examination of the victim is often not carried out acutely. Pechal et al. studied the thanatomicrobiome as a sign of antemortem health condition, which could be used to complement the biological profile. They analyzed microbial taxonomic profiles from a total of 83 cases (less than 24 h postmortem), divided into two groups: cases with evidence of heart disease detected during autopsy and cases of death resulting from violent circumstances. Heart disease was determined based on examination of the heart (including microscopic analysis) and medical history. To assess whether there were statistical associations between the postmortem microbiome and antemortem health status, they ran binomial logistic regression models to contrast community diversity with heart disease. They examined the bacterial community from the mouth, finding reduced phylogenetic diversity in cases of heart disease (a significant predictive factor, p = 0.038). In contrast, individuals whose death was due to violent circumstances had greater microbial diversity. These data suggested that increased microbial biodiversity may be an indicator of individuals without chronic health conditions, such as heart disease. This study could be biased by the age of the subjects included (44 ± 15 years in the original dataset), as heart diseases typically appear later in life and are chronic conditions, whereas violent deaths tend to involve younger individuals. Studies evaluating the bacterial community at multiple collection times (for example, near, at and after death) should be conducted . In 2010 the Earth Microbiome Project (EMP) was founded .
It represents a systematic attempt to characterize global microbial taxonomy, with the aim of understanding biogeographical variations and the factors, such as climate, altitude, latitude, or soil nature, that determine them. Indeed, the characterization of the microbiome may provide information on the geographical origin of an individual. In their study, Nagasawa et al. developed a method to determine the geographic origin of 17 cadavers with known geographic origins by examining polymorphism in the H. pylori vacA region. VacA is a cytotoxin whose gene comprises two variable parts: the s-region (s1 and s2) and the m-region (m1 and m2). East Asian H. pylori strains are associated with the vacA s1 type; within East Asian countries, the m1 type predominates in Japan and Korea, whereas the prevalence of the m2 type gradually increases in the southern parts of East Asia. The phylogenetic tree of H. pylori showed 3 major clusters: the East Asian type I, including Japan, China and South Korea; the Western type II, including Russia, the Americas and Europe; and the Southeast Asian type III, including Thailand, Hong Kong, Taiwan, and Vietnam. All the Japanese ( n = 10), South Korean ( n = 1), and Chinese ( n = 2) cadavers examined in the study were classified as type I, the single Thai cadaver was classified as type III, and the single Afghan and Filipino-Western cadavers were classified as type II. Although Filipinos and Taiwanese are typically classified in the type III cluster, the different classification in this study could be due to external factors; in fact, the Taiwanese cadaver was classified as type I, probably because the individual was recorded as being ethnically Taiwanese but had lived in Japan from childhood. These findings demonstrate the influence of the geographic origin and residential history of the cadavers on this method.
These considerations recall the difference between geographical origin and ethnicity, the latter still provided by the analysis of human genome polymorphisms. More studies should be conducted including more geographical origins and knowing the background details of the analyzed sample, mostly unknown in this article . Escobar et al. described the composition of the gut microbiota, comparing Colombian adults with adults of different geographic origins (USA, Europe, Japan and South Korea). They included a total of 126 individuals, of whom 30 were Colombian. Each participant provided a fecal sample. They found that the gut microbiota of Colombians was mostly composed of Firmicutes (average ± SD: 79 ± 13%) and Bacteroidetes (17 ± 12%), followed by other phyla present at minor frequencies. The remaining datasets had lower proportions of Firmicutes and higher proportions of Bacteroidetes, but the dispersion of data among individuals was as notable as in the Colombian dataset. UniFrac analysis indicated that the gut microbiota of Colombians was significantly different from that of Americans, Europeans, and Asians ( p = 0.001). Moreover, they found that the relative abundance of Firmicutes decreased with latitude ( p = 0.002) and that of Bacteroidetes increased with latitude ( p = 0.001). The authors highlighted that the sample size was not designed to achieve statistical power, due to the lack of previous data on Colombians and the highly variable results of studies performed on other populations. Moreover, given the interplay between geographic origin and diet, they concluded that it would be interesting to tease apart the effects of diet and geography on the composition of the gut microbiota . Brinkac et al. conducted a study comparing the variation in the scalp and pubic hair microbiome across different geographic origins. They collected hair samples derived from scalp and pubic areas from adults residing in Maryland (MD, n = 8) and California (CA, n = 8).
Additionally, scalp hairs were collected from adults residing in Virginia (VA, n = 5). Each individual provided multiple samples, for a total of 42 scalp and 32 pubic hair samples. They observed that the Peptoniphilus and Staphylococcus genera had different sample abundances between MD and CA, with no significant clustering by geographic location for either hair type. Compared to scalp hair, the analysis of pubic hair revealed a higher error rate (22.58% versus 17.24% for scalp hair), suggesting that scalp hair had greater geolocation prediction power than pubic hair. More studies should be conducted to understand the hair characteristics that may influence these results: for example, length, hair collection technique (cut or plucked), sebum production, and environmental or lifestyle factors. Increasing sample sizes and performing longitudinal studies would help further clarify the usefulness of both scalp and pubic hair as indicators of forensic information . The human microbiome has been hypothesized to be potentially useful in studies investigating its transfer during sexual contact. Ghemrawi et al. described genital microbial signatures based on the analysis of five male and five female genital samples (for a total of 10 samples) and compared these results to those from longitudinal studies. They did not include couples in the study, and no information was collected regarding recent intercourse. The shotgun sequencing results showed taxonomic diversity and richness of the penile microbiome, as opposed to the vaginal microbiomes, which were composed predominantly of lactobacilli (about 76% of the total vaginal composition). The authors classified this study as a "pilot study", which should be complemented with a larger sample and longitudinal studies.
In fact, some factors that could have influenced the results should be considered: the collection time, the absence of information on previous sexual intercourse, and the presence of other variables (for example circumcision, or the day of the menstrual cycle on which the sample was taken) . Williams et al. collected microbiome profiles from pubic hairs and/or swabs taken from the pubic mound region of 43 participants (including 12 partner pairs). Participants provided 1 to 5 sets of sample collections (at 3 set time points), resulting in 155 completed sample collections. Individuals were stratified based on several characteristics, such as sex, age, ethnicity, sexual activity, condom use, and oral-to-genital contact. Results showed that the two couples who did not report sex in the seven days prior to sample collection at any of the time points were the only couples whose male and female samples consistently fell into separate clusters. Regarding the influence of the level of sexual activity, they found a significant correlation between the proportion of couple co-clustering and the average number of times the couple reported having sex during the seven days preceding each sample collection. Increased frequency of sexual activity did not, however, guarantee increased microbiome similarity (for instance, two couples were similarly sexually active but clustered together 33% and 80% of the time, respectively). This result established that sexual activity per se was not sufficient to ensure microbiome sample sharing, and made it unlikely that a single incidence of intercourse would always result in detectable transfer. This study would require a larger sample size and greater control over some variables; for example, we do not know whether voluntary sexual contact may have different characteristics from contact conducted by force.
Furthermore, controlled studies involving the collection of samples immediately prior to sexual contact and then at fixed time points after it would serve to quantify the variability in the proportion of transfer, both to hairs and to the pubic mound, and how long any mixing is retained . Dixon et al. studied the variation of bacterial communities in six male-female sexual partner pairs before and after sexual intercourse, also controlling for female cyclic variation and selecting strict parameters to simulate a single episode of penetrative sexual encounter. Five replicate swabs (penile skin and vaginal) were collected for each participant and time point, totaling 20 per couple (10 male, 10 female). Taxonomic analysis found that in both male and female samples there was an increase in the total genera observed post-coitus. The most notable change in abundance post-coitus was the increase, in male samples, of the dominant female taxon, Lactobacillus; few changes were observed in female samples. In three female samples, an increase in the distance between the samples taken before and after coitus was observed, while the male samples showed progressive clustering after coitus. In contrast, in one pair, the female before-and-after samples were tightly clustered, while the male samples had a larger distance between each other. The authors hypothesized that both the male and female genital microbiomes might be susceptible to alteration by the opposite sex. Despite these results, the authors highlighted some limitations: they did not know what specific intimate behaviors occurred during the sexual encounter, making it difficult to hypothesize a relationship between microbial diversity and the effect of intercourse. Therefore, larger study groups should analyze circumcision as a penile skin variable and evaluate additional time points to assess microbiome recovery.
Finally, they did not consider that the partners could have been sexually active during menstruation, and it is also conceivable that the volunteers did not respect the abstinence period. They should also have collected more information on participants' health and the time of sampling, in order to reduce accidental factors or contamination. Since bite mark injuries can be present in sexual abuse, Kennedy et al. assessed the matching of oral streptococcal DNA sequences from bite marks to those obtained from the teeth responsible. They also evaluated the capability of three genomic regions of streptococcal DNA to discriminate between participant samples. They enrolled 16 individuals who generated self-inflicted bites on their upper arms. The following genetic targets were examined: the hypervariable region 9 of the streptococcal 16S rRNA gene, a stretch of noncoding DNA located between the 16S and 23S rRNA genes (ITS), and a stretch encoding the beta subunit of bacterial RNA polymerase (rpoB). The 16S rRNA model revealed a sensitivity of 100%, with a 25% false positive rate. The ITS model had a 65% chance of yielding a false positive. Finally, the rpoB model matched all bite marks to the corresponding teeth, achieving perfect discrimination between samples from teeth responsible for a bite and those not responsible. A major limitation of this study, besides the sample size, is that the bite marks were self-inflicted; moreover, it did not analyze how diseases affecting dental elements, such as cavities, could lead to microbiome variability. Furthermore, translating this study to real casework, it is not known how long the microbiome deposited by a bite can persist and whether it can be influenced by a microbiome not only from a different site of the body, but from another individual.
For the human microbiome to be effectively applied to identification in forensic science, it must exhibit temporal stability and specificity to particular body sites and to sex. Furthermore, the mechanisms involved in transfer should be explored in depth, so that the variables that may influence the changes can be predicted. These variables can be divided into environmental factors, lifestyle choices, and internal factors, which also include the subject's state of health. In the selected studies, specific limitations were identified and described. Furthermore, all studies share the following limitations: the instability of the microbiome in response to intrinsic and extrinsic factors, for example the use of antibiotics or the presence of a disease or of hormonal factors that modify the microbiome; the difficulty of maintaining ideal conditions during the sampling, transport, and treatment of the microbial community, since different microbial populations may require different protocols (furthermore, according to what is accepted by the scientific community, a valid protocol should be tested under field conditions, subjected to peer review, have a known error rate, be standardized, and be generally accepted; this scientific methodology is the way to ensure the reproducibility and comparability of research results to be applied in concrete cases of judicial investigation, with the same safeguards in terms of privacy and confidentiality as any other human tissue samples or identifying sources of information); the presence of contaminants from humans, the environment, and other living beings, such as animals and insects; the sample size of the studies presented in this review, which appears to be still too small; and the lack of a standardized definition of changes in the microbial community in the postmortem period. The postmortem human microbiome, as Javan et al.
reported, includes two components: the thanatomicrobiome, consisting of microbes that inhabit internal organs and body fluids after death, and epinecrotic microbial communities, represented by microbes found on the surface of decaying remains. In fact, the thanatomicrobiome is conditioned by many endogenous and exogenous factors, including climatic conditions and the presence of animals, as well as by postmortem translocation and agonal diffusion phenomena. Almost all of the studies presented (with the exception of the study by Kodama et al.) were constructed according to a rigid design to control interfering variables (e.g., hand washing in subjects, voluntary recruitment of subjects); nonetheless, such a study design could be difficult to adapt to real forensic applications. Many studies should be conducted to identify a common matrix (i.e. a sampling site) less influenced by external and internal factors. For example, the hands represent a useful site because they are more involved in contacts, but they are also more susceptible to confounding factors; conversely, the forehead is less susceptible to external contacts but influenced by individual factors (for example, sebum production). Robust information on the stability of the microbiome over time is also lacking. Some studies included in this review explore these differences by including several set time points; however, information on baseline and long-term time points is often missing. Furthermore, it remains unclear for how long microbial fingerprints persist unchanged on skin and surfaces, and thus for how long these traces can be analyzed to obtain reliable results. It was demonstrated, in fact, that skin microbiota shed by an individual can change over time, undergoing degradation within hours, and temporal variation has been observed in human skin microbial composition. These factors can constitute a great limit to microbiome-based methods of human identification. Kodama et al.
conducted a study in which postmortem skin microbiomes and microbiomes from hand-held objects (e.g. phones, doorknobs) were collected to trace the associations between individuals and objects. At 16 death scenes, they swabbed the right palm of the decedent and personal objects at different times: at the scene of death, upon arrival at the morgue, and at 6-hour intervals thereafter until autopsy or external examination. A total of 98 objects were swabbed at the 16 death scenes, 88 of which yielded sufficient genetic material for sequencing. Postmortem skin microbiomes were correctly associated with objects at an average accuracy rate of 75%, but the level of accuracy varied by scene. The observed variation was attributed to the time elapsed since the object was last touched, handling by other individuals, and the nature of the objects, which could inhibit microbial colonization (e.g. exposure to cleansers, lubricants, and heat). Regarding the methodology of this review, the lack of consistency and the heterogeneity of the studies, as well as the outlined limitations, preclude the performance of a meta-analysis. Furthermore, performing a quality assessment of the included studies was not feasible due to the vast range of study methodologies and the broad spectrum of definitions. Since it is not always possible to achieve forensic identification based on traditional sciences, the microbiome has recently been studied as an alternative method. Despite the recognition of its potential use, there are still many limitations that prevent reaching a degree of probability of identification sufficient for establishing evidence, especially in Court. Even though a few protocols for postmortem procedures have been proposed by experts in the field, there is a lack of knowledge and sharing at the territorial level and too much disparity among the various ways of operating in forensics, as in other fields of forensic science.
Today, forensic microbiology could serve as a supplementary tool, combined with traditional techniques, to potentially reveal more information about the individual in question . The creation of a forensic microbiome “biobank” could facilitate the improvement of technologies for isolating and analyzing bacterial organisms, the development of a set of reference microbial genome sequences, the provision of new computational analysis tools for organisms, and the advancement of sequencing technologies. |
Identification of Key Molecular Pathways and Associated Genes as Targets to Overcome Radiotherapy Resistance Using a Combination of Radiotherapy and Immunotherapy in Glioma Patients | 2d4d1e62-fe67-4d60-81c8-a966817354dc | 10931693 | Internal Medicine[mh] | Despite the impressive advances of cancer immunotherapy via immune checkpoint inhibitors (ICIs) in treating many types of cancer in the last decade, diffuse low-grade glioma (LGG) (grade II/III) is still largely incurable, which induces profound disability and high mortality. More than half of LGGs evolve and progress to grade IV glioma (glioblastoma multiforme, GBM), which has a dreadful prognosis with a median survival of less than two years. A leading hypothesis for the lack of efficacy of ICI-based immunotherapies in diffuse gliomas is that the commonly used ICIs may have focused on the wrong targets in gliomas. As is well known, the commonly used ICIs such as nivolumab, pembrolizumab, and ipilimumab all reduce immune suppression mediated by regulatory T-cells (Tregs) through blockade of either the PD1/PD-L1 axis or the CTLA4. However, there is strong evidence indicating that immune suppression in gliomas is predominantly performed by tumor-associated macrophages (TAMs), not Tregs. It is known that macrophages can be polarized toward a pro-inflammatory/antitumor phenotype (M1) or an anti-inflammatory/immune suppressive phenotype (M2). Gliomas and other cancers have been shown to have the capacity for polarizing macrophages toward the pro-tumor M2 phenotype. In particular, the colony-stimulating factor-1 receptor (CSF-1R or CD115) plays an important role in macrophage development and polarization. 
Specifically, binding of CSF-1 to CSF-1R triggers auto-phosphorylation of the receptor on several tyrosine residues, and this can activate multiple intracellular pathways, including the phosphatidyl inositol 3-kinase (PI3K) pathway, which promotes macrophage maturation and upregulates expression of genes that lead to the pro-tumor M2 phenotype . This leads to great interest in investigating the therapeutic potential of CSF-1R inhibitors in cancer treatment. Indeed, in a glioma mouse study, the inhibition of CSF-1R with BLZ-945 resulted in a reduction of M2 polarization within the tumor microenvironment and decreased tumor growth rates . Nevertheless, a subsequent study indicated that persistent inhibition of CSF-1R alone was inadequate for long-term tumor control due to drug resistance, and glioma growth resumed after an initial period of slow proliferation . Radiotherapy (RT) has been widely used as a fundamental component of cancer treatment received by about half of all patients with cancer, including glioma . RT is traditionally delivered for purposes of local control, but many RT-treated cancer patients relapse with local tumor recurrence and distant metastasis. The perception of RT as a simple local treatment for tumors has undergone a significant transformation in recent years. It is now widely acknowledged that RT has the potential to trigger a systemic immune response and reprogram the tumor microenvironment (TME) . This provides a compelling rationale for combining RT with immunotherapies to develop novel treatments. Increasing evidence suggests that the treatment combining radiotherapy (RT) with immunotherapy via colony-stimulating factor-1 receptor (CSF-1R) inhibition is promising to improve survival over RT alone or CSF-1R inhibition alone. A promising report showed that the CSF-1R inhibitor enhanced the efficacy of RT and reduced infiltration of myeloid suppressor cells in an orthotopic and heterotopic mouse model using the human GBM cell line U251 . 
To further understand the dynamics of the combined RT plus CSF-1R inhibition anti-glioma therapy, Akkari et al. (2020) conducted an in-depth investigation of the dynamic changes in different tumor-associated macrophages (TAMs) in a randomized mouse study. Specifically, they explored how RT dynamically influences the relative abundance and phenotypes of brain-resident microglia (MG) and peripherally recruited monocyte-derived macrophages (MDMs) in glioma mice. The study identified radiation-specific, stage-dependent gene expression signatures for MG and MDM in murine gliomas, confirming altered expression of several genes and proteins in the Notch and Hippo pathways in recurrent murine gliomas. These researchers observed that targeting these TAM populations using the CSF-1R inhibitor BLZ-945 in combination with RT could enhance the efficacy of RT and significantly improve survival in preclinical glioma models. These important findings unveil the dynamics and adaptability of distinct macrophage populations in the irradiated tumor microenvironment, offering translational potential for enhancing the effectiveness of standard-of-care treatment in gliomas. Further support for the effectiveness of the combination therapy has been provided by another independent mouse study. Using an orthotopic, immunocompetent GBM mouse model, Almahariq et al. (2021) showed that inhibition of CSF-1R with BLZ-945 enhanced the efficacy of RT in glioma treatment and resulted in significantly improved mouse survival compared to RT alone or CSF-1R inhibition alone in murine gliomas. Notably, more than 70% of mice in the combination therapy group achieved long-term survival. In summary, recent preclinical mechanistic studies have indicated that CSF-1R inhibition, such as through BLZ-945, as a standalone treatment for gliomas may not be adequate to achieve a significant improvement in survival.
However, combining CSF-1R inhibition with RT may significantly enhance RT-induced antitumor immunity, potentially overcoming RT resistance and resulting in long-term improvement in survival outcomes in murine gliomas. Given the currently poor overall survival rates in glioma patients, an in-depth investigation of the key molecular pathways and mechanisms of RT resistance and of the combination therapy (RT plus BLZ-945) in these preclinical studies is warranted. Moreover, the translation of mechanistic findings to prolong the survival of human glioma patients is of great clinical significance. There has been no systematic identification of the key molecular pathways that underlie RT resistance and their relevance as targets for combination therapies in humans. Here, we focused on three significant signaling pathways, which might underlie RT resistance and the improved efficacy of the combination therapy. As is well known, human gliomas are much more heterogeneous than the animal models studied. It is thus unclear whether the new mechanistic findings from the mouse studies can be translated successfully to multiple independent cohorts of human glioma patients. There is an urgent need for a translational study investigating existing evidence for the potential effectiveness of combinations of radiotherapy (RT) plus immunotherapy via CSF-1R inhibition on multiple independent cohorts of human glioma patients. Our study focused on low-grade glioma (LGG) (grade II/III) patients, since studies aimed at identifying effective biomarkers for the prognosis of RT-treated LGG patients are still limited. Therefore, based on the pathways and mechanistic study findings in murine gliomas, it is of clinical importance to detect prognostic signatures based on DEGs and associated pathways that underlie the effect of CSF-1R inhibition among RT-treated LGG patients, in order to optimize novel therapeutic strategies.
In this paper, we first identified the key molecular pathways reflecting the mechanisms of RT resistance, and then we evaluated the utility of key genes in these identified pathways as targets for combination therapy using CSF-1R inhibition to prolong the survival of RT-treated LGG patients. Specifically, borrowing strength from the existing RNA-seq gene expression data from the in-depth mouse study of Akkari et al. (2020), we identified a set of differentially expressed genes (DEGs) between the monotherapy (RT) treated mice and those under the combination treatment (RT plus CSF-1R inhibition). We then translated the DEGs identified from mouse samples into their human genome counterparts based on orthology mapping. Subsequently, enrichment analyses were conducted for mouse and human separately, which identified three significantly enriched pathways: the phosphoinositide 3-kinase (PI3K)/AKT pathway, the Hippo pathway, and the Notch pathway. Within each pathway, we identified a gene signature using a Cox regression model fitted to RNA-seq data from a cohort of 295 irradiated LGG patients from The Cancer Genome Atlas (TCGA) database on the NCI website. As an independent validation cohort to evaluate the prognostic accuracy of the Cox model based on the DEGs, we used 127 irradiated LGG patients in the Chinese Glioma Genome Atlas (CGGA). Towards this end, time-dependent ROC curves and corresponding AUCs were used to demonstrate the prognostic performance of the identified genetic biomarkers in irradiated and non-irradiated LGG patients. In addition, Kaplan–Meier (KM) curves were generated and compared between high and low risk scores. Finally, we constructed a gene signature by selecting the significant genes from the Cox models built for each of the three pathways.
The identified genetic biomarkers showed high AUCs at 2, 3, and 5 years in the irradiated LGG patients in both the training cohort (TCGA) and the independent validation cohort (CGGA), indicating good predictive performance of the identified genetic signature. Elevated expression levels of the signature DEGs are highly predictive of poor survival of RT-treated LGG patients. One can potentially lower the expression of these DEGs using CSF-1R inhibition (e.g., using the small-molecule inhibitor BLZ-945) among RT-treated glioma patients to achieve survival advantages, as shown in the mouse studies. Thus, the identified gene signature can potentially be used to define new targets to optimize therapeutic strategies. In short, the high-impact genes can serve as druggable targets for developing novel immunotherapies for patients not responsive (or resistant) to current radiation therapies, in order to prolong the survival of these LGG patients. These results can potentially aid the design of human clinical trials to translate the identified mechanisms of the promising mouse studies into effective novel therapies for glioma patients by combining RT with immunotherapies via CSF-1R inhibition. We began with the experimental findings of a preclinical mouse study that revealed CSF-1R inhibition as a useful strategy to overcome radiation resistance in murine gliomas. After downloading the preclinical RNA-seq data from the Gene Expression Omnibus (GEO) website ( https://www.ncbi.nlm.nih.gov/geo/ ) with accession number GSE99537 (accessed on 30 May 2023), we selected mouse samples that received the combination treatment (i.e., radiation plus CSF-1R inhibition) or the monotherapy of radiation alone. Here, the selected mouse samples were all reported to have developed resistance to radiation; however, combining CSF-1R blockade with radiotherapy was found to yield substantial improvements in overall survival in preclinical models.
Based on the promising findings in mice, our objective was to identify the key molecular pathways and associated predictive genes that underlie the mechanism by which the combination therapy overcomes RT resistance in glioma mice. We would then translate these newly identified mechanistic insights from preclinical studies to human patients for the development of a promising combination of therapies involving radiation (RT) and immunotherapies, with the goal of prolonging the survival of glioma patients. For this purpose, two human glioma datasets (TCGA and CGGA) were used. Since the selected mice received either RT only or RT plus CSF-1R inhibition, our translation focused mainly on the group of irradiated LGG patients. describes the workflow of our translational research strategy applied in this study. 2.1. Differentially Expressed Genes (DEG) Analysis in Mouse Data As described in , we identified the differentially expressed genes (DEGs) between the monotherapy (RT) and combination treatment groups in the mouse data using DESeq2. Here, we used the Wald test and mean fit type in DESeq2, with significance thresholds set as p-value < 0.05 and log2FoldChange > 0.5. In total, 285 DEGs were identified. The union of the 285 DEGs and the 693 significantly upregulated genes in both MDM and MG from yielded a total of 874 DEGs in mouse, which were translated to human for further analysis based on orthology between human and mouse. 2.2. Identification of Key Pathways via Enrichment Analysis in Mouse and Human To gain insight into the molecular mechanisms, we performed enrichment analysis based on the 874 DEGs for mouse and human separately using the KEGG database. The most statistically significant signaling pathway in the KEGG enrichment analysis was the PI3K/AKT pathway, which is of critical importance in CSF-1R inhibition or radiation therapy in mouse glioma models.
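As a concrete illustration, the DEG thresholding and union steps described above can be sketched in a few lines of Python. DESeq2 itself runs in R, so this sketch assumes its results have already been exported as simple (gene, log2 fold change, p-value) records; all gene names and statistics below are invented placeholders, not values from the study.

```python
# Sketch of the DEG selection step on exported DESeq2-style results.
# Thresholds mirror the text: p-value < 0.05 and |log2FoldChange| > 0.5.

def select_degs(results, p_cutoff=0.05, lfc_cutoff=0.5):
    """Keep genes passing both the p-value and log2 fold-change thresholds."""
    return {g for g, lfc, p in results if p < p_cutoff and abs(lfc) > lfc_cutoff}

# Toy DESeq2-style output (gene, log2FoldChange, p-value); illustrative only.
deseq2_results = [
    ("Itgb8", 1.2, 0.001),   # passes both thresholds
    ("Tgfb2", 0.9, 0.030),   # passes both thresholds
    ("Actb",  0.1, 0.900),   # fails both thresholds
    ("Hes1",  2.0, 0.200),   # fails the p-value threshold
]

degs = select_degs(deseq2_results)

# Union with a (hypothetical) set of genes upregulated in both MDM and MG,
# analogous to combining the 285 DEGs with the 693 upregulated genes.
upregulated_mdm_mg = {"Tgfb2", "Jag1", "Dchs1"}
combined = degs | upregulated_mdm_mg
print(sorted(combined))  # → ['Dchs1', 'Itgb8', 'Jag1', 'Tgfb2']
```

In the actual analysis, the combined set would then be mapped to human orthologs before enrichment analysis.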
The important roles of multiple DEGs in the Notch and Hippo pathways were also extensively discussed by Akkari et al. (2020). Here, all three target pathways were found to be statistically significant in the enrichment analysis, i.e., the PI3K/AKT pathway, the Hippo pathway, and the Notch pathway. The KEGG enrichment bar plots are given in . The results indicated that combination therapy of RT with CSF-1R inhibition can target and stabilize these pathways, which were elevated under RT-only therapy and should reflect the underlying mechanism leading to improved survival of glioma mice and human patients. We identified the pathway-related genes for further analysis. The identified genes for each pathway are given in . 2.3. Selection of Genes Predictive of Patient Survival in Each of the Key Pathways First, a univariate Cox regression model was fitted for each gene in the pathway-related gene set individually using the TCGA cohort of LGG patients treated with radiation therapy. The candidate genes (DEGs) for the multivariable Cox model analysis were selected from the univariate Cox regression models if their regression coefficients were positive and their p-value < 0.1. When the number of candidate genes was large, we performed a Lasso-based Cox regression analysis to shrink the feature set further. Then, we used the Lasso-selected genes, adjusted for clinical information (patient age and glioma grade), to fit a multivariate Cox regression model for each pathway to show the relative strength of each gene. In both the training and validation datasets, a risk score was computed for irradiated LGG patients to arrange samples in descending order. The top 50% of patients were classified as the high-risk group, while the bottom 50% were classified as the low-risk group. For easy visualization of the prognostic utility of the Cox model, we plotted Kaplan–Meier curves comparing the survival of patients with high versus low risk scores, cut at the median risk score.
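To make the risk stratification step concrete, here is a minimal Python sketch of computing a Cox-style risk score (the linear predictor, i.e., a weighted sum of covariates) and splitting samples at the median score. The coefficients and covariate values are hypothetical placeholders, not the fitted values reported in this paper.

```python
# Minimal sketch of risk-score stratification from Cox model coefficients.

def risk_score(coefs, covariates):
    """Linear predictor of a Cox model: sum of coefficient * covariate value."""
    return sum(coefs[name] * covariates[name] for name in coefs)

def median_split(scores):
    """Label each sample high or low risk relative to the median score."""
    ordered = sorted(scores)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    return ["high" if s > median else "low" for s in scores]

# Hypothetical coefficients and standardized covariate values.
coefs = {"age": 0.25, "grade": 0.6, "geneA": 0.4}
patients = [
    {"age": 1.2,  "grade": 1.0, "geneA": 2.0},
    {"age": -0.5, "grade": 0.0, "geneA": 0.3},
    {"age": 0.1,  "grade": 1.0, "geneA": 1.1},
    {"age": -1.0, "grade": 0.0, "geneA": 0.2},
]

scores = [risk_score(coefs, p) for p in patients]
print(median_split(scores))  # → ['high', 'low', 'high', 'low']
```

The resulting high/low labels are the two groups whose Kaplan–Meier curves are then compared with a log-rank test.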
Subsequently, we conducted time-dependent ROC analyses to assess the predictive performance of the selected genes of each pathway. Within each pathway, we also examined the selected genes' performance in non-irradiated LGG patients. In particular, we refitted Cox regression models with the identified signature in TCGA non-irradiated LGG subjects and re-calculated the risk score based on the refitted model in TCGA non-irradiated LGG (n = 183) and CGGA non-irradiated LGG (n = 36). 2.3.1. Identification and Evaluation of Prognostic Genes in PI3K/AKT Pathway With the 18 DEGs selected by the univariate Cox models in the PI3K/AKT pathway, we fitted a Cox regression model with the Lasso penalty to shrink the feature set further. The tuning parameter of the L1-penalty term of the Lasso Cox regression was chosen by 10-fold cross-validation. The genes selected in this pathway were ITGB8, THBS4, COL9A3, and ITGA7. The risk score (RS) formula was obtained from a as follows. (1) RS = 0.222 ∗ Age + 0.777 ∗ Grade + 0.463 ∗ ITGB8 + 0.113 ∗ THBS4 + 0.312 ∗ COL9A3 + 0.124 ∗ ITGA7 The forest plot, together with the coefficient, p-value, and hazard ratio of the identified signature in the multivariate Cox regression model, is presented in a. The bar plot in b represents the expression of the selected genes weighted by the Cox regression coefficients in the preclinical mouse trials. Besides age and grade, ITGB8 and COL9A3 are significant in the multivariate Cox model. The result suggests that a high expression level of these genes is associated with high hazard and poor survival, which is consistent with findings in mice, where these genes are all upregulated at glioma recurrence. Therefore, it provides evidence that controlling the up-regulation of these genes in the PI3K/AKT pathway among RT-treated LGG patients, e.g., using a CSF-1R inhibitor, may have a high likelihood of prolonging their survival.
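The time-dependent ROC analysis can be illustrated with a deliberately simplified cumulative/dynamic AUC at a horizon t: subjects with an event by time t are cases, subjects still under observation beyond t are controls, and subjects censored before t are excluded. This toy version ignores censoring weights (production analyses typically use inverse-probability-of-censoring weighting, as in the Uno estimator), so it is only a sketch with made-up data.

```python
# Naive cumulative/dynamic AUC at horizon t (no censoring weights).

def auc_at_time(times, events, scores, t):
    """Probability that a case (event by t) outranks a control (at risk after t)."""
    cases = [s for time, e, s in zip(times, events, scores) if e and time <= t]
    controls = [s for time, e, s in zip(times, events, scores) if time > t]
    # Subjects censored before t (time <= t, no event) fall in neither group.
    pairs = concordant = 0.0
    for c in cases:
        for k in controls:
            pairs += 1
            if c > k:
                concordant += 1
            elif c == k:
                concordant += 0.5  # ties count as half-concordant
    return concordant / pairs if pairs else float("nan")

# Toy data: higher risk score should correspond to earlier events.
times  = [1.0, 2.5, 4.0, 6.0, 7.5]
events = [1,   1,   0,   1,   0]
scores = [0.9, 0.35, 0.3, 0.4, 0.1]
print(auc_at_time(times, events, scores, t=3.0))  # → 0.8333333333333334
```

An AUC near 1 at a given horizon, as reported for the irradiated cohorts, means the risk score ranks early-event patients above long-term survivors almost perfectly at that time point.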
According to the mouse mechanistic study, combination therapy of RT with CSF-1R inhibition can reduce the up-regulation of these candidate genes and thus holds promise for prolonging patient survival, which can be verified in future clinical trials. For the irradiated LGG patients, the Kaplan–Meier (KM) survival curves for the two groups (high/low risk) of patients were well separated, and the difference was significant according to the log-rank test in both the training and validation datasets ( a,b). Subsequently, time-dependent ROC analysis was conducted to evaluate prognostic accuracy. For the training dataset of irradiated LGG patients from TCGA, the time-dependent areas under the curve (AUCs) for 2-year, 3-year, and 5-year survival rates were 0.88, 0.87, and 0.85, respectively ( c). In the independent validation dataset from CGGA, the time-dependent AUCs for 2-year, 3-year, and 5-year survival rates were 0.92, 0.84, and 0.78, respectively ( d). We also examined the selected genes in the non-irradiated LGG patients ( e,f). The results suggest that our identified gene signature in the PI3K/AKT pathway does not work well for non-irradiated LGG patients. For example, the AUC for 5-year survival rates in the non-irradiated CGGA LGG patients ( f) was only 0.47. 2.3.2. Identification and Evaluation of Prognostic Genes in Hippo Pathway The identified genes in the Hippo pathway were TGFB2, YAP1, DCHS1, and WWTR1. The forest plot and the bar plot are provided in a,b. In the Cox regression, TGFB2 and DCHS1 were significantly associated with the survival of irradiated LGG patients. The risk score (RS) formula was obtained from a as follows. (2) RS = 0.295 ∗ Age + 0.592 ∗ Grade + 0.447 ∗ TGFB2 + 0.051 ∗ YAP1 + 0.269 ∗ DCHS1 + 0.281 ∗ WWTR1 These results suggest that combination therapy of RT with CSF-1R inhibition has the potential to reduce the up-regulation of these identified genes in the Hippo pathway and prolong glioma patients’ survival.
With the identified genes in the Hippo pathway, the KM curves ( a,b) for the two groups (high/low risk) of irradiated LGG patients were well separated, and the difference was significant according to the log-rank test in both the training and validation datasets. We conducted time-dependent ROC analyses in c–f. The results suggest that our identified gene signature in the Hippo pathway works well for the irradiated LGG patients but not for the non-irradiated LGG patients. In particular, for the validation dataset of non-irradiated LGG patients ( f), the AUC at 5 years was 0.72, which was noticeably lower than the corresponding value of 0.87 ( d) for the validation dataset of irradiated LGG patients. 2.3.3. Identification and Evaluation of the Prognostic Genes in Notch Pathway With the univariate Cox regression model, the selected genes in the Notch pathway were HES1 and JAG1. The risk score (RS) formula was obtained from a as follows: (3) RS = 0.283 ∗ Age + 0.643 ∗ Grade + 0.225 ∗ HES1 + 0.517 ∗ JAG1 In this pathway, only JAG1 is significant in the multivariate Cox model ( a). Also, b indicates that the identified genes are upregulated in the monotherapy (RT) group of mice compared with the combination treatment group. This provides compelling evidence that CSF-1R inhibition can serve as an effective treatment for irradiated LGG patients, potentially targeting the Notch pathway. We conducted Kaplan–Meier analysis and time-dependent ROC analyses to assess the performance of the identified genes in the Notch pathway with respect to predicting patient survival. As shown in a–f, the identified signature in this pathway works well for the irradiated LGG patients; however, it still does not work for the non-irradiated LGG patients. For example, the AUC for 5-year survival rates in the non-irradiated LGG patients of CGGA ( f) was only 0.32. 2.4.
Predictive Performance of the Identified Significant Genes from Three Pathways We constructed a gene signature that collected the significant genes (p-value < 0.02) from the above three pathways, i.e., the PI3K/AKT pathway, the Hippo pathway, and the Notch pathway. The predictive signature consists of 6 covariates: ITGB8, COL9A3, TGFB2, JAG1, age, and grade. The risk score (RS) formula was obtained from a as follows: (4) RS = 0.255 ∗ Age + 0.495 ∗ Grade + 0.485 ∗ ITGB8 + 0.236 ∗ COL9A3 + 0.138 ∗ TGFB2 + 0.397 ∗ JAG1 a indicates that a high expression level of these genes is associated with poor survival of irradiated LGG patients, which is consistent with the findings in the mouse studies, where these genes are all upregulated in the monotherapy (RT-only) group ( b). Therefore, it provides evidence that CSF-1R inhibition might target these three pathways to overcome RT resistance, and that controlling the up-regulation of these genes among RT-treated LGG patients may have a high likelihood of prolonging their survival. According to the mouse mechanistic study, combination therapy of RT with CSF-1R inhibition can reduce the up-regulation of these candidate genes and thus holds promise for prolonging patient survival, which can be verified in clinical trials. Importantly, in addition to the evidence of up-regulation in the mouse mechanistic studies, all of these genes have been reported in different studies as being associated with glioma and other cancer progression. More details on these signature genes are provided in the Discussion. Moreover, we provide the KM analysis and the bar plots that show the expression of each selected gene in MG in the different treatment groups of mice in . We assessed the performance of this gene signature in LGG patients. The results of the KM and time-dependent ROC analyses are provided in a–f.
From these results, we can see that the identified gene signature from the three pathways shows promising predictive accuracy for irradiated LGG patients but not for non-irradiated LGG patients. The three pathways might thus provide useful targets for developing novel therapies in human gliomas.
First, the univariate Cox regression model was fitted for each gene in the pathway-related gene set individually using the TCGA cohort of LGG patients treated with radiation therapy. The candidate genes (DEGs) for multivariable Cox model analysis were selected from the univariate Cox regression model if their regression coefficients were positive and p-value < 0.1. When the number of candidate genes was large, we performed a Lasso-based Cox regression analysis to shrink the feature set size further. Then, we used the Lasso-selected genes, adjusted with clinical information (patient age and glioma grade), to fit a multivariate Cox regression model for each pathway to show the relative strength of each gene. In both the training and validation datasets, a risk score was computed for irradiated LGG patients to arrange samples in descending order. The top 50% of patients were classified as the high-risk group, while the bottom 50% were classified as the low-risk group. For easy visualization of the prognostic utility of the Cox model, we plotted Kaplan–Meier curves comparing the survival of patients with high versus low risk scores cut at the median risk score. Subsequently, we conducted time-dependent ROC analyses to assess the predictive performance of the selected genes of each pathway. Within each pathway, we also examined the selected genes’ performance in non-irradiated LGG patients. In particular, we refitted Cox regression models with the identified signature in TCGA non-irradiated LGG subjects and re-calculated the risk score based on the refitted model in TCGA non-irradiated LGG (n = 183) and CGGA non-irradiated LGG (n = 36).
2.3.1. Identification and Evaluation of Prognostic Genes in PI3K/AKT Pathway
With the selected 18 DEGs by the univariate Cox in the PI3K/AKT pathway, we conducted a Cox regression model with the Lasso penalty to shrink the feature set size further.
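The univariate pre-screening rule above (keep genes with a positive log-hazard coefficient and p-value < 0.1) can be sketched as a filter over per-gene univariate Cox fits. The coefficients below are toy numbers; the real fits come from a survival package:

```python
def prescreen(univariate_fits, p_cut=0.1):
    """Select candidate genes whose univariate Cox fit has a positive
    log-hazard coefficient and a p-value below `p_cut`, as in the text.

    `univariate_fits` maps gene -> (coefficient, p_value).
    """
    return [g for g, (beta, p) in univariate_fits.items()
            if beta > 0 and p < p_cut]
```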
The tuning parameter of the L1-penalty term of the Lasso Cox regression was chosen by 10-fold cross-validation. The genes selected in this pathway were ITGB8, THBS4, COL9A3, and ITGA7. The risk score (RS) formula was obtained from a as follows: (1) RS = 0.222 ∗ Age + 0.777 ∗ Grade + 0.463 ∗ ITGB8 + 0.113 ∗ THBS4 + 0.312 ∗ COL9A3 + 0.124 ∗ ITGA7. The forest plot, together with the coefficient, p-value, and hazard ratio of the identified signature in the multivariate Cox regression model, is presented in a. The bar plot in b represents the expression of the selected genes weighted by the Cox regression coefficients in the mouse preclinical trials. Besides age and grade, ITGB8 and COL9A3 are significant in the multivariate Cox model. The result suggests that a high expression level of these genes is associated with high hazard and poor survival, which is consistent with findings in mice, where these genes are all upregulated at glioma recurrence. Therefore, it provides evidence that controlling the up-regulation of these genes in the PI3K/AKT pathway among RT-treated LGG patients using the antibody may have a high likelihood of prolonging their survival. According to the mouse mechanistic study, combination therapy of RT with CSF-1R inhibition can reduce the up-regulation of these candidate genes and would thus be expected to prolong patient survival, which can be verified in future clinical trials. For the irradiated LGG patients, Kaplan–Meier (KM) survival curves for the two groups (high/low risk) of patients were well separated, and the difference was significant according to the log-rank test in both the training and validation datasets ( a,b). Subsequently, time-dependent ROC analysis was conducted to evaluate prognostic accuracy. For the training dataset of irradiated LGG patients from TCGA, the time-dependent areas under the curve (AUCs) for 2-year, 3-year, and 5-year survival rates were 0.88, 0.87, and 0.85, respectively ( c).
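The Lasso penalty that selects ITGB8, THBS4, COL9A3, and ITGA7 works by soft-thresholding coefficients toward zero during coordinate descent; genes whose signal falls below the cross-validated tuning parameter drop out entirely. A minimal sketch of that proximal operator (illustrative only, not the full penalized Cox fit):

```python
def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: sign(z) * max(|z| - lam, 0).

    Coordinate-descent Lasso solvers apply this update to each coefficient
    in turn; coefficients whose magnitude falls below the tuning parameter
    `lam` are shrunk exactly to zero, which is how genes are dropped from
    the signature.
    """
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```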
In the independent validation dataset from CGGA, the time-dependent AUCs for 2-year, 3-year, and 5-year survival rates were 0.92, 0.84, and 0.78, respectively ( d). We also examined the selected genes for the non-irradiated LGG patients ( e,f). The results suggest that our identified gene signature in the PI3K/AKT pathway does not work well for non-irradiated LGG patients. For example, the AUC for 5-year survival rates in the non-irradiated CGGA LGG patients ( f) was only 0.47.
2.3.2. Identification and Evaluation of Prognostic Genes in Hippo Pathway
The identified genes in the Hippo pathway were TGFB2, YAP1, DCHS1, and WWTR1. The forest plot and the bar plot are provided in a,b. In the Cox regression, TGFB2 and DCHS1 are significantly associated with the survival of irradiated LGG patients. The risk score (RS) formula was obtained from a as follows: (2) RS = 0.295 ∗ Age + 0.592 ∗ Grade + 0.447 ∗ TGFB2 + 0.051 ∗ YAP1 + 0.269 ∗ DCHS1 + 0.281 ∗ WWTR1. These results suggest that combination therapy of RT with CSF-1R inhibition has the potential to reduce the up-regulation of these identified genes in the Hippo pathway and prolong glioma patients’ survival. With the identified genes in the Hippo pathway, the KM curves ( a,b) for the two groups (high/low risk) of irradiated LGG patients were well separated, and the difference was significant according to the log-rank test in both the training and validation datasets. We conducted time-dependent ROC analyses in c–f. The results suggest that our identified gene signature in the Hippo pathway works well for the irradiated LGG patients but not for the non-irradiated LGG patients. In particular, for the validation dataset of non-irradiated LGG patients ( f), the AUC at 5 years was 0.72, which was noticeably lower than the corresponding value of 0.87 ( d) for the validation dataset of irradiated LGG patients.
2.3.3.
Identification and Evaluation of the Prognostic Genes in Notch Pathway
With the univariate Cox regression model, the selected genes in the Notch pathway are HES1 and JAG1. The risk score (RS) formula was obtained from a as follows: (3) RS = 0.283 ∗ Age + 0.643 ∗ Grade + 0.225 ∗ HES1 + 0.517 ∗ JAG1. In this pathway, only JAG1 is significant in the multivariate Cox model ( a). Also, b indicates that the identified genes are upregulated in the monotherapy (RT) group of mice compared with the combination treatment group. These findings provide compelling evidence that CSF-1R inhibition can serve as an effective treatment for irradiated LGG patients, potentially by targeting the Notch pathway. We conducted Kaplan–Meier analysis and time-dependent ROC analyses to assess the performance of the identified genes in the Notch pathway with respect to predicting patient survival. As shown in a–f, the identified signature in this pathway works well for the irradiated LGG patients; however, it still does not work for the non-irradiated LGG patients. For example, the AUC for 5-year survival rates in the non-irradiated LGG patients of CGGA ( f) was only 0.32.
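The group comparisons behind the KM plots in each pathway use the log-rank test. A self-contained sketch of the two-sample log-rank chi-square statistic (1 degree of freedom), with toy survival times rather than the actual cohort data:

```python
def logrank_stat(times1, events1, times2, events2):
    """Two-sample log-rank chi-square statistic (1 df).

    `timesX` are follow-up times and `eventsX` are 1 for death / 0 for
    censoring. This is the classic observed-minus-expected computation
    behind the KM group comparisons reported in the text.
    """
    data = [(t, e, 1) for t, e in zip(times1, events1)] + \
           [(t, e, 2) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    o_minus_e = 0.0
    var = 0.0
    for t in event_times:
        n1 = sum(1 for tt, _, g in data if tt >= t and g == 1)  # at risk, group 1
        n2 = sum(1 for tt, _, g in data if tt >= t and g == 2)  # at risk, group 2
        n = n1 + n2
        d = sum(1 for tt, e, _ in data if tt == t and e == 1)   # deaths at t
        d1 = sum(1 for tt, e, g in data if tt == t and e == 1 and g == 1)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var if var > 0 else 0.0
```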
Recently, preclinical mechanistic studies suggested that combining CSF-1R inhibition (e.g., via BLZ-945) with RT can enhance RT-induced antitumor immunity and lead to long-term improvement in outcomes in murine gliomas. It is of clinical importance to identify key molecular pathways underlying the mechanism of the combination therapy and to determine whether these promising findings from the mouse studies can be successfully translated to gliomas in human patients. In this paper, we identified differentially expressed genes (DEGs) in three significant signaling pathways (the PI3K/AKT, Hippo, and Notch pathways). Using key DEGs in the three pathways, we constructed a 4-gene predictive model to investigate resistance to radiotherapy in glioma patients and the advantages of combination therapy. This translational approach borrows strength from available data in animal models and existing human glioma cohorts. Our Cox model results suggest that CSF-1R inhibition via BLZ-945 with RT has the potential to target the identified pathways to overcome RT resistance. The high AUCs of our identified signature indicate that the models can effectively predict the survival of irradiated LGG patients.
For the combined gene signature from the three pathways, as detailed in , the time-dependent areas under the curve (AUCs) in the TCGA training set for 2-year, 3-year, and 5-year survival rates were 0.89, 0.89, and 0.84, respectively ( c). In the testing CGGA dataset, the time-dependent AUCs for 2-year, 3-year, and 5-year survival rates were 0.94, 0.89, and 0.86, respectively ( d). Notably, this signature did not accurately predict survival in non-irradiated LGG patients. For the testing CGGA dataset of non-irradiated patients, the time-dependent AUCs for 2-year, 3-year, and 5-year survival rates were 0.73, 0.75, and 0.68, respectively ( f). Similar results were obtained for the gene signature within each pathway ( f, f and f). This might indicate that the gene signature for irradiated patients does not work for non-irradiated patients, and it may be related to the fact that animal studies indicated limited survival advantages of CSF-1R inhibition alone in mouse glioma models. From these mouse data and Cox model results, it is reasonable to expect that the combination of radiotherapy and CSF-1R inhibition therapy can improve survival in human glioma patients. In particular, our Cox regression models and ROCs indicate that high expression levels of the signature genes can accurately predict the short survival of RT-treated LGG patients. Furthermore, our results shed light on the mechanism by which CSF-1R inhibition mitigates resistance to RT. Radiotherapy increases the release of CSF-1 from tumor cells and attracts macrophages to the TME. The binding of CSF-1 to CSF-1R in macrophages can activate the PI3K/AKT, Hippo, and Notch signaling pathways, which polarize macrophages to the pro-tumor M2 phenotype. The pro-tumor M2 phenotype macrophages can help tumor cells escape immune surveillance.
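The time-dependent AUCs quoted in this section follow the cumulative/dynamic definition: at horizon t, patients who died by t are cases, patients still alive after t are controls, and the AUC is the probability that a case's risk score outranks a control's. A simplified, unweighted sketch of that estimator (published estimators such as Uno's AUC additionally reweight for censoring; data below are toy values):

```python
def time_dependent_auc(times, events, scores, t):
    """Cumulative/dynamic AUC at horizon `t`.

    Cases: subjects who died (event == 1) by time t.
    Controls: subjects still under observation after time t.
    Subjects censored before t are dropped in this simplified sketch.
    """
    cases = [s for tt, e, s in zip(times, events, scores) if tt <= t and e == 1]
    controls = [s for tt, e, s in zip(times, events, scores) if tt > t]
    pairs = 0
    concordant = 0.0
    for c in cases:
        for k in controls:
            pairs += 1
            if c > k:
                concordant += 1.0   # case ranked above control
            elif c == k:
                concordant += 0.5   # ties count half
    return concordant / pairs if pairs else float("nan")
```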
Inhibition of CSF-1R can downregulate the three pathways ( b, b, b and b), which will reduce the number of macrophages polarized to the pro-tumor M2 phenotype, as observed in Akkari et al. (2020), and thus improve the survival of glioma mice. CSF-1R inhibition (e.g., via BLZ-945) could likely be used in combination with RT to lower the expression levels of these signature genes and prolong the survival of LGG patients. Therefore, the three identified pathways and key molecules might serve as druggable targets via BLZ-945 to overcome RT resistance in human patients. The three significantly enriched pathways and the related genes that we identified yield insights into the molecular mechanism of the survival advantage of combination therapy over RT-only therapy in glioma mice and human LGG patients. Each of the pathways and related genes has previously been extensively reported in the existing literature. The results of Akkari et al. (2020) suggested that the TGF-β/Hippo and Notch signaling pathways were elevated in TAMs at glioma recurrence in mice, including the gene NOTCH4. However, we found that NOTCH4 was not significantly associated with patient survival in TCGA. Various studies have shown that activation of the PI3K/AKT signaling pathway is significantly associated with resistance to CSF-1R inhibition, radiotherapy, and other therapies. Therefore, downregulation of the PI3K/AKT pathway is likely to overcome the drug resistance of RT and CSF-1R inhibition. A study demonstrated that Yes-associated protein 1 (YAP1) promotes the metastasis of U251 glioma cells by upregulating Jagged-1 (JAG1) expression and activating the Notch signaling pathway. JAG1 was significantly downregulated in the RT plus CSF-1R inhibition group compared with the recurrent RT-treated mice. High levels of JAG1 and NOTCH1/DLL1 were significantly positively associated with short survival in the TCGA cohort, which is consistent with findings in the literature.
Importantly, it has been shown that Notch1 signaling activity is elevated in GBM tissues, and that downregulation of the Notch1 pathway by shRNA and MK0752 significantly inhibited the PI3K/AKT/mTOR signaling pathway and weakened the self-renewal, invasion, and tumor growth ability of glioma-initiating cells. This is one of the potential mechanisms underlying the combination therapy (RT plus CSF-1R inhibition) that seemingly mitigated resistance to RT-only and CSF-1R inhibition-only therapy. Additionally, experimental results demonstrate that silencing JAG1 yielded a significant decrease in tumor cell proliferation in LGG cell lines, and JAG1 potentially influences PD-L1 in LGG by regulating the PI3K/AKT signaling pathway. The critical role of the Hippo pathway has been investigated in GBM, and the results suggested that activation of YAP1/WWTR1 was associated with poor prognosis in GBM. It has been found that ITGB8 (β8 integrin) expression is elevated in GBM stem cells, is positively associated with stem cell markers in glioma tissues, and can be induced by hypoxia and p38 activation. DCHS1 (dachsous cadherin-related 1) may belong to canonical cancer molecular pathways in gliomas and has been selected as a marker gene in a glioma prognostic signature. Additionally, TGFB2 (transforming growth factor beta 2) has been identified as a predictor of poor treatment outcomes in pediatric diffuse intrinsic pontine glioma. While early-phase clinical trials can commence solely based on data obtained from animal studies, they are often conducted with constrained sample sizes and limited statistical power, making them prone to false-negative results, especially if the wrong class of patients is recruited for the study. Also, findings from animal studies often fail to translate to human patients due to heterogeneity in human patients and other reasons.
Our study indicates the plausibility of the molecular mechanism of the combination therapy being successfully translated to human LGG patients, which is valuable information for designing targeted clinical trials for gliomas. Several limitations are associated with the utilization of publicly available data in this study. First, the sample sizes for the mouse data and irradiated LGG patients are considerably smaller than what might be optimal, even for preliminary analysis aimed at designing new studies and clinical trials. Second, this study was conducted in human patients using gene expression data from bulk tissue instead of single-cell gene expression. Once single-cell gene expression data from humans are available, one can investigate and translate the radiation resistance mechanism from mice to humans with respect to a specific cell type, e.g., specific types of macrophages, which might further improve the efficiency of our method and the interpretability of the findings. Thirdly, our study focuses on TAMs and does not involve their communication with tumor cells. Intercellular communication plays a substantial role in promoting the progression of low-grade gliomas (LGG). Fourthly, the TCGA and CGGA clinical datasets were collected many years ago and used a WHO grade classification system that is outdated and not necessarily identical to the most recent WHO grade definitions. There are also slightly different definitions of low- versus high-grade gliomas in the literature. The LGG gliomas we studied in this paper include WHO grade II/III gliomas, as described previously. Additionally, due to the high heterogeneity of GBM, the small overall sample size for GBM patients in TCGA, and the very few subjects treated with radiation in the TCGA dataset, we did not build a predictive model for GBM. We expect that the mechanisms of radiotherapy resistance may differ between LGGs and some grade IV gliomas (GBMs), since GBMs are more heterogeneous than LGGs.
Indeed, an existing translational study found that the drug-resistant signature identified in the GBM-proneural subtype also has good prognostic power in LGG. Importantly, our translational approach can be easily applied to GBMs. Future studies might focus on GBM when larger sample sizes are available.
4.1. Data Collection of Human Glioma Patients from Publicly Available Databases
Two independent and large human glioma cohorts, from The Cancer Genome Atlas (TCGA) database ( https://cancergenome.nih.gov/ ) and the Chinese Glioma Genome Atlas (CGGA) database ( http://www.cgga.org.cn/ ), were utilized for the translation, signature construction, and prediction of the radiation-resistant signature in humans. A set of 478 LGG subjects in TCGA and a set of 163 LGG subjects in CGGA, which have matched clinical information and gene expression profiles (RNA-seq data), were collected. Our objective is to identify key genes and molecular pathways underlying the resistance to radiotherapy and the observed efficacy of the combination therapies in preclinical studies, and to translate the identified mechanism from mouse to human. The gene signature was constructed among the irradiated LGG patients who had been treated with radiotherapy in both databases (TCGA: n = 295; CGGA: n = 127). We also examined the selected genes’ performance in non-irradiated LGG patients (TCGA: n = 183; CGGA: n = 36). TCGA was used as the training set for signature identification, model construction, and performance evaluation, while CGGA was used as the testing set to validate the predictive performance. The corresponding clinical characteristics of our cohorts, including age, sex, and grade, are provided in .
4.2.
Differentially Expressed Gene (DEG) Analysis Using Preclinical Mouse Trials
To gain insight into how CSF-1R inhibition improves survival in response to radiation resistance in mice, we started by identifying differentially expressed genes (DEGs) between the monotherapy (RT-only) and combination treatment groups, with four mouse samples in each group. The identification was conducted separately in macrophages (MDM) and microglia (MG), both of which are tumor-associated macrophages (TAMs) with different developmental origins. DEGs between the different treatment groups (monotherapy of radiation versus combination treatment of radiation plus CSF-1R inhibition) were identified using DESeq2. The significance thresholds were set as p-value < 0.05 and log2FoldChange > 0.5 using the Wald test and mean fit type in DESeq2. Subsequently, we combined the DEGs in MDM and MG together and translated the DEGs identified from mouse samples into their human counterparts based on the orthology mapping package (Orthology.eg.db, version 3.18.0). Only those DEGs that have orthologs in both human and mouse were used for further analysis.
4.3. Evaluation of Predictive Performance of the Identified Gene Signature
The log-rank test and time-dependent ROC were used to evaluate the predictive performance of the identified gene signature. Using the gene expression profile X, we first calculated risk scores f(X) = X′β̂ for LGG patients who received radiotherapy in both the training and validation datasets, in which β̂ denotes the regression coefficients (log hazard ratios) derived from a multivariate Cox regression model in the training set. Then, glioma patients were classified into high-risk or low-risk groups by choosing the median of the risk scores as a cutoff in each dataset, indicating poor or good prognoses, respectively.
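The mouse-to-human translation step in Section 4.2 amounts to a dictionary lookup against an orthology table and dropping genes without a match. A sketch with a hypothetical two-gene mapping standing in for the Orthology.eg.db lookup used in the actual analysis:

```python
def translate_degs(mouse_degs, orthology_map):
    """Keep only mouse DEGs with a human ortholog and return human symbols.

    `orthology_map` stands in for the Orthology.eg.db lookup (mouse symbol
    -> human symbol); genes without an entry are dropped, mirroring the
    requirement that orthologs exist in both species.
    """
    return sorted(orthology_map[g] for g in mouse_degs if g in orthology_map)
```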
Kaplan–Meier (KM) curves were generated to summarize patient survival in distinct risk groups, and a log-rank test was conducted to assess whether survival curves for high-risk and low-risk groups were significantly different. Time-dependent ROC analysis was performed to evaluate the accuracy of the identified signature in predicting 2-year, 3-year, and 5-year survival rates. A larger value of the area under the ROC curve (AUC) indicates better predictive power of the gene signature.
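The KM curves themselves are product-limit estimates of the survival function. A stdlib sketch of the estimator for one risk group (toy times; 1 = event, 0 = censored):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates S(t) for one risk group.

    At each distinct event time t, the survival probability is multiplied
    by (n_at_risk - deaths) / n_at_risk. Returns (time, survival) pairs
    at the event times only; censoring just removes subjects from the
    risk set.
    """
    data = list(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted({tt for tt, _ in data}):
        at_risk = sum(1 for tt, _ in data if tt >= t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
    return curve
```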
|
Stress and value: the student perspective on utilizing real vs. actor patients in objective structured clinical examinations | 1371ca74-f671-41d9-8715-c31c147b65b0 | 11247776 | Pediatrics[mh] | Since Dr. Ronald Harden's inception of the objective structured clinical examination (OSCE) in the 1970s, the OSCE has spread around the globe and cemented itself as an educational pillar spanning all levels of training and a wide range of health professions. Today, OSCEs serve both summative and formative functions. They offer nearly identical encounters, which increases the possibility of reliable, valid, and replicable assessments with definable performance goals and allows the OSCE to be used as a summative assessment of a trainee. This method reduces the biases and variability associated with other assessment methods and fills in the gaps missed by assessment methods focused primarily on knowledge. There is also value in OSCEs for formative purposes, as they create a safe clinical learning experience with feedback, ideally immediate, given in a way that contributes to growth and preparation for future clinical encounters and exams. There is evidence that formative-focused OSCE experiences may also improve performance on summative clinical assessments. Unfortunately, this opportunity for formative feedback is often missed, as such feedback is frequently not provided. OSCEs typically use standardized patients (SPs), first introduced in 1964, for clinical encounters, providing a similar encounter for each student. Studies have shown OSCEs to be the most reliable and accurate tool for assessing performance and clinical acumen compared to other methods, including modules, simulations, and case discussions. The specific components and modifications of the OSCE vary between institutions, but the vast majority include SPs and/or computerized simulations.
Significant research has been done to evaluate the OSCE’s summative qualities, connecting OSCE scores to performance on other OSCEs, knowledge-based exams, clinical evaluations and clerkship performance, national licensing examinations, and performance in residency. The OSCE has been implemented and studied in numerous formats, including at the international level in many countries for medical licensing processes. Additionally, OSCEs serve a purpose in meeting societal expectations for the evaluative rigor of clinical training. As the OSCE gained importance in medical education, the use of OSCEs with direct “patient” contact in the field of Pediatrics lagged behind due to the innate difficulty of standardizing young children and infants as SPs. The challenges of having standardized actor pediatric patients include performance fatigue (requiring a large volume of pediatric SPs), standardization of these performances, and the potential psychological effects on the pediatric SPs, particularly in the youngest patients. These patients are unlikely to completely understand what is occurring, and this may disrupt the relationship with their own primary care provider. Although some groups have invested significant time and resources into developing pediatric actor SP OSCEs, these have not become widespread in medical education. These challenges have caused Pediatric Clerkships to avoid interaction with a standardized patient for an OSCE, although more recent literature has demonstrated feasibility. There are additional methods to assess the clinical performance of trainees in clinical settings, including the Clinical Evaluation Exercise (CEX) and mini-CEX. These have been studied in adult and pediatric populations and determined, to varying degrees, to be reliable and valid summative and formative assessment tools.
The mini-CEX is now favored and deployed in many residency training programs as part of the method of assessing trainee competence and providing formative feedback for trainees. It is promoted by the American Board of Internal Medicine (ABIM), which encourages its use on every clinical rotation. It has been shown to have superior reproducibility to the traditional CEX and is more easily implementable due to its increased efficiency in the assessment of a trainee. There are a few studies in the literature that show feasibility and a positive impact on clinical skills in medical students. The Pediatric Clerkship at Loma Linda University School of Medicine (LLUSM) has a long history of utilizing a real, non-actor, non-standardized patient encounter as part of medical student training and evaluation. This study assessed perceived stress levels and the perceived educational value of the non-actor clinical evaluation exercise (CEX) compared with a traditional OSCE from the student perspective. We are not aware of any previous literature that has assessed learner perceptions of the value or stressfulness of a CEX compared with an OSCE. It has been described in the literature that clinical experiences such as OSCEs are stressful for students; however, students do recognize their utility for formative and summative evaluations. We hypothesized that there would be no difference in stress or value between the two different summative evaluation encounters.

A cross-sectional study was performed to evaluate medical student perceptions comparing the real patient CEX to the standardized patient actor OSCE. This study received Institutional Review Board (IRB) exemption given the minimal risk to participants and anonymous data collection. Students were asked about the perceived value of the CEX and OSCE for improving clinical skills, in addition to stress levels experienced during each exam.
Eligible study participants consisted of all third-year medical students at LLUSM rotating through the required Pediatric and Internal Medicine (IM) clinical clerkships during the 2016–2017 academic year (n = 165). Study participation was voluntary, and all participating students were verbally consented to complete anonymous questionnaires related to their perceptions of the clinical and learning assessment experiences for the purpose of medical education research. It was emphasized that there was no penalty for students who did not participate. Questionnaires were administered immediately after the routine Pediatric CEX and IM simulated patient OSCEs and prior to students receiving their grades for the OSCE or CEX. At LLUSM, third-year students are assigned to rotation sequences for the required clerkships of Family Medicine, Obstetrics and Gynecology, Internal Medicine, Neurology, Psychiatry, Surgery, and Pediatrics. The sequence of clerkships follows the same progression, with the order of rotation determined by the initial clerkship. In this scheduling format, approximately 60% of participating students took the Pediatric CEX prior to the IM OSCE, and approximately 40% vice versa. On the Pediatric rotation, the CEX took place during the outpatient portion of the clerkship and could be scheduled between weeks 2–7 of the 8-week clerkship. The IM OSCE took place in the last week of the 10-week clerkship. Students participate in OSCEs in the pre-clinical years and have OSCEs in some form on every other rotation throughout the year. The CEX was administered using a non-actor, non-standardized patient at the pediatric resident clinic: at a hospital-based clinic in Loma Linda, California, for one month of the study period and at a federally qualified health center in San Bernardino, California, throughout the rest of the study period. The two different locations were due to a scheduled move of the pediatric resident clinic.
There were 4 faculty involved in the direct observation of students and delivery of feedback. The students had regular interactions with the faculty administering the CEX, as these faculty were also involved in delivering didactic sessions to the students. The CEX evaluator did not interject during the student’s portion of the encounter and allowed the student to complete their history, exam, assessment, and plan with the patient and family. The faculty member was ultimately responsible for the medical care of the patient, and the student did have the opportunity to observe the interaction between the faculty member and patient/family as part of their CEX. This clinic is also a resident-run clinic, and the CEX encounters were often actually shorter than the resident-based encounters. Due to the focus on education for both medical students and residents, the administration had no concerns about assessment methods within the clinic. Potential CEX patients were screened by the pediatric faculty CEX evaluators to include well-child appointments or simple acute visits, such as fever, rash, acute otitis media, upper respiratory infection symptoms, constipation, gastroenteritis, feeding issues, abdominal pain, or reflux symptoms, without serious chronic ongoing medical problems. All CEX patients were 5 years of age or younger (including newborn visits) and had an English-speaking family. Participating caregivers were informed that their child appeared to be appropriate for a student assessment and provided verbal informed consent to participate in the CEX. Medical students were provided the name, sex, and age of the patient less than 5 minutes prior to the encounter. They were not informed of the reason for the visit. Each student was allotted 30 minutes to complete the encounter, which included obtaining the chief complaint and history, performing the physical exam, and formulating an assessment and plan with appropriate parental counseling.
Each encounter was observed in the room by a member of a core group of Pediatric Clerkship faculty who received training on CEX encounter assessment. The IM OSCE was completed in the clinical skills center, which is utilized for other LLUSM OSCEs throughout all 4 years of medical training. Adult standardized patient actors who had received 2–3 hours of OSCE training were utilized. These actors were recruited and trained by the clinical skills center at LLUSM. Students completed two clinical encounters: one 15-minute encounter and a second 20-minute encounter during which a point-of-care ultrasound examination was performed. Both encounters required completion of a history, physical exam, assessment and plan, and patient counseling, and the IM OSCE also involved documentation of a note. Students were told only the patients’ chief complaints prior to the exam and were provided a prompt with information on the age, sex, vitals, and a more detailed chief complaint with some context immediately prior to each encounter. Chief complaints for the IM simulated patient encounters were abdominal pain and shortness of breath. Internal Medicine OSCEs were observed directly through a two-way mirror, graded in real time, and recorded for future student review. Four Internal Medicine faculty members evaluated the encounters using a standardized rubric, which assessed dimensions of the history, the appropriate systems for the physical exam, the differential diagnosis, the appropriate work-up, and the quality of the note (including support for the differential diagnosis). After completing each encounter, students were given 10 minutes to write a clinical note documenting the history, physical exam, differential diagnosis with supporting evidence, and plan, which also contributed to the overall OSCE grade. The dimensions assessed in both the Pediatric CEX and Internal Medicine OSCE were similar, with a rubric that served as the basis for feedback.
They included evaluation of the comprehensiveness of the history of present illness, physical exam, differential diagnosis/assessment, plan, and information sharing. Because the IM OSCE involved standardized patients with specific chief complaints, there were additional, more specific history dimensions relevant to the chief complaint on which students were evaluated. After both the Pediatric CEX and Internal Medicine OSCE, an individual feedback session was provided by the grading faculty member. Following this feedback, students were offered the opportunity to complete the research questionnaire. In each clerkship, four faculty members performed the assessments and provided feedback. The CEX and OSCE were both moderately important, as they were necessary to pass the clerkship and factored significantly into grades.

The questionnaire consisted of 14 survey questions derived from the “Pressure/Tension” and “Value/Usefulness” statements of the Intrinsic Motivation Inventory, a validated tool used to assess feelings of stress and the perceived value of an exercise, hereafter referred to simply as concepts of “stress” and “value.” Each question was answered using a five-point Likert agreement scale rating perceptions related to the concepts of “value” and “stress.” Subgroups of the 14 questions were analyzed according to 11 concept categories: “real-life clinical scenario”; feelings of “nervousness,” “relaxed,” “pressured,” and “at ease”; “usefulness” and “importance” for improving clinical skills; and accuracy for assessing clinical skills. Median scores and interquartile ranges were calculated for each survey item. Scores for each of the 14 questions and each of the concept categories were compared across the two groups. The Likert scale responses were numbered 1–5, and Kruskal-Wallis and Mann-Whitney U tests were used to test for differences between the various scores (reported as medians) in the CEX and OSCE groups.
We looked for differences in distribution between the groups using the Kolmogorov-Smirnov test. Statistics were performed with SPSS v 28 (SPSS, IBM). In addition to the Likert-scaled questions, students completed two free-response short-answer prompt questions: (1) What factors contributed to the amount of nervousness or pressure you felt during this OSCE? (2) What was the most helpful aspect of this OSCE? For the free-response questions, answers were coded by two investigators (SK and YM), grouped by theme, and tallied according to frequency of theme occurrence.

Out of 165 third-year students who completed the Pediatric and IM clerkships, 147 (89%) and 145 (88%) completed the CEX and OSCE questionnaires, respectively. Scores for the various questions assessing stress levels during the CEX and OSCE, as well as their perceived educational value, were compared across the two groups. Results showed a significant difference across the two groups in feeling pressure during the scenario, with median scores lower in the CEX than in the OSCE group (Median CEX Score 3, Median OSCE Score 4, p < 0.001), as shown in Table and Fig. . Students also reported feeling more at ease during the CEX than during the OSCE (Median CEX Score 3, Mean CEX Score 3.16; Median OSCE Score 3, Mean OSCE Score 2.81; p = 0.002). There was no significant difference between the two groups in feeling nervous (Median CEX Score 4; Median OSCE Score 4; p = 0.543) or relaxed (Median CEX Score 3; Median OSCE Score 3; p = 0.055) during the encounter. Median scores were calculated for how important and useful the encounters were at improving each skill set (history-taking, physical exam, and interpersonal communication) and for how accurately the encounters represented those skill sets.
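The nonparametric comparisons described in the methods (Kruskal-Wallis, Mann-Whitney U, and Kolmogorov-Smirnov tests on 1–5 Likert scores) can be sketched with SciPy. The response distributions below are hypothetical stand-ins, not the study's data; only the group sizes (147 CEX, 145 OSCE) follow the paper.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu, ks_2samp

rng = np.random.default_rng(1)
# Hypothetical 1-5 Likert responses for "I felt pressured" in each group
cex = rng.choice([1, 2, 3, 4, 5], size=147, p=[0.10, 0.20, 0.35, 0.25, 0.10])
osce = rng.choice([1, 2, 3, 4, 5], size=145, p=[0.05, 0.10, 0.25, 0.35, 0.25])

h_stat, p_kw = kruskal(cex, osce)       # omnibus rank-based test
u_stat, p_mw = mannwhitneyu(cex, osce)  # two-sample rank comparison
ks_stat, p_ks = ks_2samp(cex, osce)     # difference in distribution shape

print(f"median CEX = {np.median(cex):.0f}, median OSCE = {np.median(osce):.0f}")
print(f"Kruskal-Wallis p = {p_kw:.4f}; Mann-Whitney p = {p_mw:.4f}; KS p = {p_ks:.4f}")
```

Rank-based tests are appropriate here because Likert responses are ordinal: they compare distributions without assuming interval spacing between the 1–5 categories.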
There was a significant difference in median scores showing that medical students found the CEX to be more useful (Median CEX Score 5, Median OSCE Score 4; p < 0.0001) and more important (Median CEX Score 5, Median OSCE Score 4; p < 0.0001) than the OSCE in improving their history-taking, physical exam, and interpersonal skills. Additionally, scores showed that students perceived that the CEX more accurately represented their skills compared with the OSCE (Median CEX Score 4, Median OSCE Score 4; Mean CEX Score 3.874, Mean OSCE Score 3.375; p < 0.0001). Overall, there was a significant difference between the two groups, with students showing a greater perception of improvement in, but also accurate representation of, their history-taking skills (Median CEX Score 4, Median OSCE Score 3, p < 0.001), physical exam skills (Median CEX Score 4, Mean CEX Score 3.91; Median OSCE Score 4, Mean OSCE Score 3.4; p < 0.001), and interpersonal skills (Median CEX Score 4, Mean CEX Score 4.1; Median OSCE Score 4, Mean OSCE Score 3.59; p < 0.001). Finally, results also showed that students perceived the CEX to more accurately represent a real-life clinical scenario (Median CEX Score 5, Median OSCE Score 3, p < 0.0001) than the OSCE. (Table )

The results from the first free-response question are noted in Table . Factors contributing to the degree of nervousness or pressure during the encounters showed that having the attending physically in the room during the CEX was mentioned most (44 comments), followed by the knowledge that it was a graded encounter (40 comments), uncertainty about the patient’s history and chief complaint (29 comments), and simply being observed (24 comments). Students in the OSCE reported that the timed nature of the encounter most contributed to feeling pressured (62 comments), followed by knowledge of the graded encounter (45 comments), the feeling of “being watched” (21 comments), and concern and uncertainty over missing key details (17 comments).
Our data show that the students rated their performance during the CEX as a more accurate representation of their day-to-day interactions with patients and as more valuable to their growth as physicians. Students felt it was more useful and more important for the improvement of their history-taking, physical exam, and interpersonal skills than the encounter with a standardized patient. The data thus ran counter to our hypothesis of no difference: the real patient encounter was viewed as more valuable than the OSCE encounter. The increase in recognized value could be related to the removal of the “theatrics” associated with OSCEs. This has been described with OSCEs as an underlying theme of disingenuousness and insincerity due to their theatrical nature, in which students put on a performance they believe will move the audience (the evaluators). Evaluating medical students during real-patient encounters, such as the CEX, may therefore reveal a more accurate window into true clinical competency, with a reduction in theatrical performance. There were several distinct factors that we would have expected to increase stress with the CEX on the Pediatric Clerkship rotation. It was performed with the evaluating physician in the same room as the trainee, which was noted in the qualitative portion to be a very commonly mentioned theme around stress in the encounters. Additionally, for the vast majority of students, the CEX was performed in an unfamiliar clinical environment they had not encountered previously. Despite these factors, which may have increased stress or anxiety, the CEX was found to be no more stressful or anxiety-provoking than the standardized OSCE encounters. While our methods involved a full CEX with both formative and summative purposes, it is reasonable to extrapolate to a mini-CEX, which could be used for solely formative purposes and would likely decrease the level of stress or anxiety a trainee experiences compared with a standard CEX or OSCE.
This would allow medical students to receive feedback in the moment on direct patient encounters, and faculty should be reassured that such an encounter may be viewed as extremely valuable and no more stressful than a standardized OSCE encounter. This study was inherently limited by its inclusion of only a single institution and by its asymmetrical comparison of two separate clinical evaluations in two different clinical fields and patient populations: Internal Medicine and Pediatrics. Students were rating two different educational experiences, and it is possible they viewed the CEX as a complete unit, decreasing their rating of the overall stress associated with the observed portion of the encounter. It is also possible that the nature of working with children resulted in a less stressful environment for students. Without the ability to control for other factors, such as the inherent differences in the clinical fields and patients, and clerkship experiences such as directors, teaching and evaluating faculty, residents, and clinical clerkship duties, there was a clear risk of confounding bias. This study overall supports the usefulness of incorporating a real patient into the evaluation of medical students during their medical school clerkships. This is, to our knowledge, the first study comparing self-reported stress levels and the perceived value of a real patient encounter, as in the CEX, with those of a standardized actor OSCE. Further research should be performed to evaluate the utility of this method in medical education and how it translates into actual learning. One of the primary questions relating to a CEX in any setting is the sustainability of the assessment. Ensuring that faculty are compensated for this time is critical to maintaining their interest in contributing to students’ development. This assessment was incorporated into the time the clerkship physician leaders were allocated for clerkship responsibilities and was supported by the Pediatric Department.
With medical schools facing challenges in providing directly observed encounters for students, supporting Pediatric clerkships to perform this and provide both formative and summative feedback experiences would be highly valuable. Our data show that students perceived their performance during the CEX as more accurately representative of their day-to-day interactions with patients. In addition, the real patient encounter was rated by medical students as more useful and more important for improving their history-taking, physical exam, and interpersonal skills than the evaluation with a standardized patient. We believe that this provides clerkship directors with appropriate reasoning to incorporate a CEX into their evaluation of students on their clerkship.

Table: Median (interquartile range) Likert agreement scale scores for Intrinsic Motivation Inventory questions on students’ perceptions of “stress” and “value” in clinical examinations. Likert scale responses were as follows: 1-strongly disagree, 2-disagree, 3-neutral, 4-agree, 5-strongly agree. Scores from the Internal Medicine OSCE were compared to the Pediatric CEX using Kruskal-Wallis testing, with p ≤ 0.05 considered statistically significant.

Table: Themes extracted from two free-response questions asked immediately after completing the Internal Medicine OSCE and Pediatric CEX examinations. The frequency count of each theme is given in addition to the percentage out of total responses. The first question asked about factors contributing to feeling nervousness or pressure during the exam. The second question asked for perceptions on the most helpful aspect of the exam. Response themes were generally similar between the Internal Medicine and Pediatric examinations.
Benefit Design and Access to Dental Care Among Seniors With Medicare Advantage Dental Benefits (PMC11762240)

Oral health is a critical component of healthy aging. Yet many older US residents face affordability challenges when it comes to going to the dentist. In fact, cost barriers for dental care are more severe than for any other type of health care service. Part of the reason is that traditional Medicare (TM) does not cover dental services except for patients in need of medically necessary procedures, such as tooth extractions to treat mouth infections prior to cancer treatment. The landscape is more complex when we look at how Medicare Advantage (MA) plans address dental care. While MA enrollment has grown, with more than half of Medicare beneficiaries enrolling in MA as of 2023, the percentage of MA plans offering dental coverage has also increased. From 2020 to 2024, the percentage of MA plans offering coverage for preventive dental care services (eg, checkups, cleanings) increased from 75% to 90%, while those offering comprehensive dental care services (eg, restorations, root canals) increased from 50% to 85% of plans. Racial and ethnic minority groups, along with those with low educational attainment and lower incomes, are especially likely to enroll in MA plans with dental benefits, suggesting that these benefits could play a role in reducing inequities in access to dental care services and, ultimately, oral health. However, evidence suggests that having dental coverage via MA has little impact on dental outcomes. MA enrollees still have difficulty accessing dental care compared with other populations. Among US residents ages 65 years and older, 12.6% of enrollees with an MA dental benefit reported a cost barrier to dental care compared with 7.4% with non-MA private dental insurance. This suggests that MA dental coverage may not provide the same financial protection as private dental insurance.
Another study also shows that enrollment in MA does not result in higher dental care use compared with TM once people become eligible for Medicare after age 65 years. Similarly, compared with TM enrollees, MA enrollees experience a larger decrease in dental spending after transitioning into Medicare from private dental insurance after retirement. Enrollees with MA dental benefits also experience substantial out-of-pocket costs for dental care, nearly equivalent to TM enrollees. These results may suggest that MA dental benefit design may be insufficient when it comes to reducing financial barriers to dental care among Medicare enrollees. In this study, we examine the characteristics of MA dental benefit plans and their association with unmet dental needs, financial barriers to dental care, and dental care use.

Data Sources

We examined a cross-section of beneficiaries with MA dental benefits from the 2019 Medicare Current Beneficiary Survey (MCBS). Data analysis was performed between May and August 2024. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline. The University of Florida Institutional Review Board determined this study to be exempt, as it uses publicly available data sources without Health Insurance Portability and Accountability Act identifiers. The MCBS is a nationally representative survey of individuals enrolled in TM or MA. The survey uses a rotating cohort design that samples approximately 15 000 enrollees per year. Each year, the response rate to the baseline interview is about 60%; in subsequent interviews, the response rate exceeds 80%. We focused on the respondents who completed the cost supplement, from which measures of dental care utilization were derived. The MCBS documents monthly MA contract and plan identifiers for each beneficiary.
We linked MA dental plan characteristics to each MCBS respondent using the respondent’s county of residence, MA contract number, and MA plan number. We extracted MA dental plan data from publicly available 2019 Plan Benefit Package files published by the Centers for Medicare & Medicaid Services (CMS) and released quarterly. The Plan Benefit Package files include information on plan coverage and supplemental benefits (eg, dental, vision, hearing) for all MA payers that submit a bid to CMS. These files include detailed information on the services covered under an MA dental plan (eg, restorations, radiographs, oral examinations), general plan characteristics (eg, health maintenance organization [HMO] vs preferred provider organization, prior authorization requirements, referral requirements), and benefit design attributes (eg, coinsurance/copayment levels, plan deductibles, annual plan benefit maximums).

Sample

We analyzed data from respondents who participated in the 2019 MCBS cost supplement (unweighted N = 8308; weighted N = 53 920 725). We further restricted our sample to respondents whose MA contract and plan identifiers were able to be matched to identical MA contract and plan identifiers in the Plan Benefit Package files. For each quarter of 2019, we restricted the sample to only MA plans with coverage for preventive or comprehensive dental services. After merging these data and removing MCBS respondents not enrolled in an MA dental plan for the full year, our sample resulted in 1949 unweighted and 11 111 081 weighted observations. We considered an MCBS respondent to be enrolled in an MA dental plan if that plan offered any preventive dental services (radiographs, oral examinations, prophylaxis, or fluoride treatment) or any comprehensive dental services (nonroutine services, diagnostics, restorative, endodontics, periodontics, prosthodontics, tooth extractions, or oral and maxillofacial surgery).
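The linkage and sample-restriction steps can be illustrated with pandas on toy tables; the column names and ID values below are hypothetical, not the actual MCBS or Plan Benefit Package file layouts.

```python
import pandas as pd

# Hypothetical extracts: MCBS respondents and CMS Plan Benefit Package dental records
mcbs = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "contract_id": ["H1234", "H1234", "H9999", "H5678"],
    "plan_id": ["001", "002", "001", "003"],
})
pbp = pd.DataFrame({
    "contract_id": ["H1234", "H1234", "H5678"],
    "plan_id": ["001", "002", "003"],
    "preventive_dental": [1, 1, 0],
    "comprehensive_dental": [0, 1, 1],
})

# Inner merge keeps only respondents whose contract/plan IDs match a PBP record
linked = mcbs.merge(pbp, on=["contract_id", "plan_id"], how="inner")
# Keep respondents whose plan covers preventive or comprehensive dental services
dental_sample = linked[(linked["preventive_dental"] == 1) | (linked["comprehensive_dental"] == 1)]
print(dental_sample["respondent_id"].tolist())  # respondent 3 has no matching plan record
```

An inner merge on the compound key mirrors the paper's restriction to respondents whose contract and plan identifiers match a Plan Benefit Package record.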
After removing observations with missing gender, race, income, educational attainment, rural status, self-reported health status, or job status, our final analytic sample contained 1789 unweighted or 10 425 596 weighted observations. These factors are used as covariates in the regression analysis.

Dependent Variables

Our outcomes were 3 reported measures of dental care access: (1) whether the survey respondent visited a dentist in the past year; (2) whether the survey respondent reported an unmet dental need in the past year; and (3) whether the survey respondent reported an unmet dental need due to cost in the past year. Unmet dental need due to cost is a subset of overall unmet dental need. We examined these 3 outcomes as dependent variables in our analysis.

Plan Characteristic Covariates

In our analysis, we examined several dental plan characteristics and benefit design covariates. The plan characteristics in our regression models included whether it was an HMO dental plan, whether the dental plan covered at least two dental cleanings in a year, whether the dental plan required prior authorization for any dental services, whether the plan required a referral, and whether all typical dental services were covered by the plan (radiographs, examinations, dental cleanings, diagnostics, restorations, endodontics, periodontics, prosthodontics, tooth extractions, and oral and maxillofacial surgery).
Benefit design covariates in our regression models included whether the respondent was enrolled in an MA dental plan mandating out-of-pocket (OOP) payments for preventive services (either coinsurance or copayments), a categorical variable classifying OOP costs for comprehensive dental services (no OOP, positive copayment, coinsurance less than 50%, coinsurance greater than or equal to 50%, or enrolled in a preventive-only plan), and a categorical variable for annual plan benefit maximum (less than $500, between $501 and $1500, between $1501 and $2000, between $2001 and $2500, greater than $2500, or no annual plan benefit maximum).

Statistical Analysis

We estimated probit regression models controlling for individual and county-level covariates to assess the association between MA dental plan attributes and dental care utilization, unmet dental need, and unmet dental need due to cost. As individual-level covariates, we included sex (male or female), race and ethnicity (Asian, Black, Hispanic, White, or other race [no further information is available]), household income as a percentage of the federal poverty level (FPL) (less than or equal to 100% of FPL, between 101% and less than or equal to 135% of FPL, between 136% and less than or equal to 200% of FPL, and more than 200% of FPL), age (65-74, 75-84, ≥85 years), educational attainment (less than high school, high school, some college, college, postgraduate), census region (New England/Middle-Atlantic, East North Central, West North Central, South Atlantic, East South Central, West South Central, Mountain, Pacific), Medicare-Medicaid dual eligibility status, job status (a binary variable for whether the respondent or spouse was working), self-reported health (good/excellent vs poor/fair health), and rural/urban residence determined by Rural-Urban Commuting Area Codes.
County-level covariates included November 2019 MA penetration by quantile (percentage of Medicare-eligible individuals enrolled in MA), the log of 2018 median annual household income, percentage in poverty in 2018, population density based on the 2010 census, and the number of dentists per 100 000 population. Apart from MA penetration, other county-level covariates were extracted from the Area Health Resource File. For ease of interpretability, after probit estimation, we calculated marginal effect estimates to express differences in terms of percentage points. We performed all analyses in Stata SE, version 18.0 (StataCorp LLC). In all analyses, we applied survey weights from the 2019 MCBS cost supplement and accounted for the MCBS complex survey design. Our threshold for statistical significance was set at P < .05 and all hypothesis tests were 2-sided.
After removing observations with missing gender, race, income, educational attainment, rural status, self-reported health status, or job status, our final analytic sample contained 1789 unweighted or 10 425 596 weighted observations. These factors were used as covariates in the regression analysis. Our outcomes were 3 reported measures of dental care access: (1) whether the survey respondent visited a dentist in the past year; (2) whether the survey respondent reported an unmet dental need in the past year; and (3) whether the survey respondent reported an unmet dental need due to cost in the past year. Unmet dental need due to cost is a subset of overall unmet dental need. We examined these 3 outcomes as dependent variables in our analysis. We also examined several dental plan characteristics and benefit design covariates. The plan characteristics in our regression models included whether it was an HMO dental plan, whether the dental plan covered at least two dental cleanings in a year, whether the dental plan required prior authorization for any dental services, whether the plan required a referral, and whether all typical dental services were covered by the plan (radiographs, examinations, dental cleanings, diagnostics, restorations, endodontics, periodontics, prosthodontics, tooth extractions, and oral and maxillofacial surgery).
Benefit design covariates in our regression models included whether the respondent was enrolled in an MA dental plan mandating out-of-pocket (OOP) payments for preventive services (either coinsurance or copayments), a categorical variable classifying OOP costs for comprehensive dental services (no OOP, positive copayment, coinsurance less than 50%, coinsurance greater than or equal to 50%, or enrolled in a preventive-only plan) and a categorical variable for annual plan benefit maximum (less than $500, between $501 and $1500, between $1501 and $2000, between $2001 and $2500, greater than $2500, or no annual plan benefit maximum). We estimated probit regression models controlling for individual and county-level covariates to assess the association between MA dental plan attributes and dental care utilization, unmet dental need, and unmet dental need due to cost. As individual-level covariates, we included sex (male or female), race and ethnicity (Asian, Black, Hispanic, White, or other race [no further information is available]), household income as a percentage of the federal poverty level (FPL) (less than or equal to 100% of FPL, between 101% and less than or equal to 135% of FPL, between 136% and less than or equal to 200% of FPL, and more than 200% of FPL), age (65-74, 75-84, ≥85 years), educational attainment (less than high school, high school, some college, college, postgraduate), census region (New England/Middle-Atlantic, East North Central, West North Central, South Atlantic, East South Central, West South Central, Mountain, Pacific), Medicare-Medicaid dual eligibility status, job status (a binary variable for whether the respondent or spouse was working), self-reported health (good/excellent vs poor/fair health), and rural/urban residence determined by Rural-Urban Commuting Area Codes. 
County-level covariates included November 2019 MA penetration by quantile (percentage of Medicare-eligible individuals enrolled in MA), the log of 2018 median annual household income, percentage in poverty in 2018, population density based on the 2010 census, and the number of dentists per 100 000 population. Apart from MA penetration, other county-level covariates were extracted from the Area Health Resource File. For ease of interpretability, after probit estimation, we calculated marginal effect estimates to express differences in terms of percentage points. We performed all analyses in Stata SE, version 18.0 (StataCorp LLC). In all analyses, we applied survey weights from the 2019 MCBS cost supplement and accounted for the MCBS complex survey design. Our threshold for statistical significance was set at P < .05 and all hypothesis tests were 2-sided.
Comparison of Dental Care Access by Insurance Status
There were 1789 MA enrollees with 12 months of dental benefits. Respondents enrolled in an MA dental plan had lower dental care access compared with those enrolled in an MA plan without a dental benefit or those enrolled in traditional Medicare (eTable 1 in ). Compared with TM enrollees, MA enrollees with a dental benefit had higher rates of unmet dental need (12.5% [95% CI, 10.5%-14.4%] for MA enrollees vs 8.1% [95% CI, 6.9%-9.3%] for TM enrollees) and lower rates of dental care use (47.0% [95% CI, 44.4%-49.6%] for MA enrollees vs 59.2% [95% CI, 57.4%-61.0%] for TM enrollees).
Sample Characteristics
Of 1789 enrollees, respondents had a mean (SD) age of 74.7 (7.4) years; 58.4% were female and 13.2% lived in a rural county. Beneficiaries identified themselves as Asian (1.9%), Black (15.6%), Hispanic (13.4%), White (67.8%), or another race (1.3%), and 20.1% of MA dental beneficiaries had a household income at or below the FPL.
Twenty-two percent of MA dental beneficiaries had less than a high school education, 29.4% had a high school degree, 27.9% had some college, 12.5% had a college degree, and 8.4% had a postgraduate degree. In our analytic sample, 25.5% of beneficiaries were dual eligible and 80.1% reported good or excellent general health (eTable 2 in ). In our sample, 12.7% of respondents reported an unmet dental need, 9.5% reported an unmet dental need due to cost, and 49.2% visited a dentist in the year. Among MA beneficiaries with a dental benefit, 70.8% enrolled in an HMO dental plan, 90.3% were in plans that offered at least two dental cleanings per year, 75.0% had OOP costs for preventive services, 27.5% were in plans that required a referral, 59.2% were in plans that required a prior authorization, and 29.8% were in plans that offered a full spectrum of dental services. Regarding OOP costs for comprehensive dental services, 34.1% were in plans with no OOP costs, 11% of enrollees were in plans that required a greater than $0 copayment, 4.5% were in plans with coinsurance between 0% and 50%, 24.9% were in a plan with at least 50% coinsurance, and 25.6% were in a dental plan that did not cover comprehensive services. Regarding annual plan maximums, 9.9% of enrollees were in plans with between $0 and $500 maximums, 24.6% were in plans with between $501 and $1500 maximums, 14.8% were in plans with between $1501 and $2000 maximums, 4.8% were in plans with between $2001 and $2500 maximums, 3.0% were in a dental plan with a greater than $2500 maximum, and 42.9% faced no benefit maximum.
Multivariable Models
MA beneficiaries enrolled in an HMO dental plan were 7.0 percentage points (95% CI, 3.2-10.9 percentage points; P < .001) more likely to report an unmet dental need compared with individuals not in an HMO dental plan.
The percentage point difference for individuals enrolled in plans that impose OOP payments on preventive services compared with individuals enrolled in plans that do not was 3.9 percentage points (95% CI, −0.5 to 8.3 percentage points; P = .08), although the finding was not statistically significant. Prior authorization requirements were also associated with unmet dental need (4.5 percentage points [95% CI, 0.3-8.7 percentage points]; P = .03). Compared with plans with no OOP costs for comprehensive services, plans not covering those services were associated with unmet dental need (12.1 percentage points [95% CI, 3.2-21.0 percentage points]; P = .008). Otherwise, at different positive coinsurance or copayment levels for comprehensive services, there was no substantial variation in reported unmet dental need among MA dental plan enrollees. Relative to plans that imposed up to a $500 annual plan maximum, MA enrollees in plans with no annual maximum reported lower rates of unmet dental need (−12.4 percentage points [95% CI, −20.9 to −3.8 percentage points]; P = .004). The estimated probability of an MA enrollee with dental benefits reporting an unmet dental need increased as annual plan maximums decreased ( A). HMO dental plan enrollment was associated with greater unmet dental need due to cost (4.4 percentage points [95% CI, 0.9-7.8 percentage points]; P = .01) compared with enrollment in non-HMO plans. Compared with plans that did not have an OOP cost for comprehensive services, plans not covering those services were associated with unmet dental need due to cost (7.8 percentage points [95% CI, 0.6-15.0 percentage points]; P = .03). There was no substantial variation in reported unmet dental need due to cost among MA dental plan enrollees at different positive coinsurance or copayment thresholds for comprehensive services.
Relative to individuals in plans that imposed up to a $500 annual plan maximum, MA enrollees in plans with a greater than $2500 annual plan maximum (−11.7 percentage points [95% CI, −20.9 to −2.4 percentage points]; P = .013) or no annual maximum (−11.4 percentage points [95% CI, −19.5 to −3.3 percentage points]; P = .006) reported lower rates of unmet dental need due to cost. The estimated probability of an MA enrollee with dental benefits reporting an unmet dental need due to cost increased as annual plan maximums decreased ( B). Relative to individuals enrolled in plans that did not have an OOP cost for comprehensive services, individuals enrolled in plans with a copay for comprehensive services had lower dental care use (−14.6 percentage points [95% CI, −26.2 to −3.0 percentage points]; P = .01). Relative to individuals in plans that imposed up to a $500 annual plan maximum, MA enrollees in dental plans that imposed between a $501 and $1500 annual maximum (11.1 percentage points [95% CI, −0.2 to 22.4 percentage points]; P = .06), between a $2001 and $2500 annual maximum (16.2 percentage points [95% CI, 1.5-30.9 percentage points]; P = .031), a greater than $2500 annual plan maximum (21.6 percentage points [95% CI, 6.0-37.3 percentage points]; P = .007), or no annual maximum (12.4 percentage points [95% CI, 1.2-23.6 percentage points]; P = .03) had higher rates of dental care use. The estimated probability of an MA enrollee with dental benefits reporting a dental visit increased as annual maximums increased ( C).
Sensitivity Analyses
We performed a number of sensitivity analyses. First, we restricted our sample to MA enrollees enrolled in mandatory dental plans (ie, plans that do not require additional fees for enrollment) (eTable 3 in ). Overall, the estimated marginal effect sizes for plan characteristics, OOP costs, and plan maximums were qualitatively similar to our main specification.
In a second sensitivity analysis, we removed MA dental plan enrollees who switched between plans during the 12-month period. The estimated marginal effect sizes from this sensitivity analysis were also qualitatively similar to our main specification (eTable 4 in ). In a final sensitivity analysis, we also removed individual and county-level covariates from our regression models. Overall, the results are qualitatively similar to our main specification (eTable 5 in ).
To our knowledge, our study is the first to examine the association between dental plan attributes and dental care use among individuals enrolled in MA dental plans and their likelihood to report unmet dental needs and financial barriers to dental care. HMO dental plans were associated with higher rates of unmet dental need and unmet dental need due to cost. This may be due to HMO plans limiting care provision networks, although we were not able to explore this in our data. Future research should examine the breadth of dental care provision networks in MA dental plans and how this could affect access to dental care. We found that imposing OOP payments for preventive services and prior authorization requirements were associated with more unmet dental care needs. The resulting unmet care needs are consistent with broader findings that show MA plans use prior authorization to limit utilization and cost, which could limit access to care. Our findings may also lend support to the existing literature showing that MA dental benefits do little to enhance access to dental care. Enrollees in MA plans that did not cover comprehensive dental services had higher rates of unmet dental need and cost barriers, suggesting that preventive-only dental plans could hinder access to dental care. However, among plans that mandated OOP costs for comprehensive services, there was little variation in reported unmet dental need or unmet dental need due to cost at different levels of cost sharing. The finding with regard to coinsurance is inconsistent with broader research that finds a negative association of coinsurance rates with the likelihood of visiting a dentist. It is possible that plans with limited or no coinsurance control costs in ways that we were unable to observe in our data, such as using narrow care provision networks.
As annual plan benefit maximums increased, we found that the likelihood of reporting unmet dental need was lower and visiting a dentist was higher. Further, our findings suggest that an annual plan benefit maximum of at least $2500 was associated with an increase in access to dental care.
Limitations
Our study has several limitations. First, although we controlled for several individual and county-level characteristics, the observational study type does not allow causal links to be made between MA dental plan benefit design characteristics and measures of dental care access. Second, there were several potential confounders that we were unable to include in our regression models, such as measures of network adequacy, which are likely to be associated with benefit design attributes and measures of dental care access. Third, while most MA plans are managed by private dental insurers, our findings only applied to the MA population and were not generalizable to employer-sponsored dental benefits or the individual dental insurance market. Fourth, due to endogeneity from plan choice arising from enrollees with greater dental needs choosing more comprehensive plans, our estimates could overestimate the effect of plan generosity on dental care access. Lastly, we believe the CMS data, for some MA dental plans, do not effectively distinguish copays from coinsurance (eg, some MA dental plans have copayments for comprehensive services greater than $100, which is unusual). Due to this potential measurement error, the fact that our coinsurance regressor was not significant while the copay regressor was significant in our dental care use model should be interpreted with caution. Namely, it should not be grounds for concluding that copays matter but coinsurance does not.
Results of this study suggest that dental plan offerings that attempt to limit resource utilization through HMOs or prior authorization requirements or that cover only preventive dental services are associated with barriers to dental care for MA enrollees. Annual plan benefit maximums below $2500 were associated with higher rates of unmet dental need and lower dental care utilization. These findings suggest that MA dental plans could be regulated to improve access to care among beneficiaries.
Medical student learning on a distributed training platform in rural district hospitals | 9f1dc876-8319-4a67-8cfd-dc3483ec1b3b | 11369550 | Family Medicine[mh] | For several years, there has been an increasing trend of decentralising clinical training of medical students from tertiary health care centres, to urban, peri-urban and rural sites. , This shift has been influenced by the increasing number of medical students at South African medical schools who cannot be accommodated in tertiary institutions, the need to make the curriculum more relevant to the needs of the country and a recognition of the academic value of students learning in more rural platforms, where they get exposure to generalist care of patients presenting with undifferentiated problems as well as an understanding of the patient context. , , This process of decentralisation of training is not simple as there are multiple factors that need to be considered. Fortunately, a framework for distributed health professionals training was developed in 2015, which helps to guide the implementation of this training. Although decentralised training is often used to describe training outside of tertiary academic complexes, ‘distributed platform’ as used in this article is seen as a more open, non-hierarchical term. Distributed training has been described as ‘training activities for undergraduate students that takes place away from tertiary academic complexes’. Distributed placements are excellent in preparing students for future practice as they develop self-confidence and competencies as they build on preexisting knowledge and skills gained in earlier years. This self-confidence and competency are achieved through the academic concept of experiential learning, which is the ability to construct knowledge and meaning from real-life experiences encountered in daily practice at distributed sites. 
Experiential learning theories provide explanations for how individuals learn in unique ways as they react to their perceptions of experience. Through these experiences, medical students are stimulated to consider how they can be a self-regulated learner, a concept defined as a process that helps guide an individual’s goal-directed activities by controlling and managing their cognition, affect and behaviour. It is critical that they have this understanding as, upon graduating, they will be expected to function with some level of independence, although (initially) still under supervision, and provide optimal health care, while able to identify and willingly engage in ongoing professional development activities that serve to maintain their learning and competence. For students to develop academically as they engage with experiential learning, they must take active responsibility for their learning. This means that they must have a sense of agency for self-directed learning, which ideally should have already been developed prior to entering clinical rotations. The centrality of responsibility underpins experiential learning, but requires maturity and deep reflection from the learner on their prior experiences in order to fully engage in the present learning opportunities. This ensures transformation of the learner as they interact with authentic ‘real’ scenarios.
Educational setting/learning environment
The Family Medicine department at the University of KwaZulu Natal (UKZN) believes that exposing medical students to a distributed rural learning environment is critical, as: (1) the rural context provides different but important learning opportunities to those provided in urban teaching hospitals and (2) it is a critical setting in which students (and graduates) need to learn to work. Evidence from around the world suggests that such exposure can influence where graduates decide to practice when their training is complete.
It is important to note that approximately half of the global population live in rural areas and are served by less than a quarter of the world’s medical doctors, with sub-Saharan Africa served by only 4% of the global health workforce. To address these challenges, there is an urgent need to change selection of students, training content and context and continual support of medical doctors, which in turn would hopefully improve retention of the personnel once placed in such marginalised communities. The Family Medicine rural block is an attempt to expand the context of where training occurs. The Family Medicine Integrated Primary Care 3 (IPC3) module at UKZN is one of six final year modules that the 6th year medical students have to complete in order to attain their MBChB degree. This module is designed around the CanMEDS framework of core competencies, with assignments organised thematically around key roles of a physician namely: communicator, collaborator, manager, health advocate, scholar and professional. The purpose of the rotation is for the students to experience and to practise primary care medicine that is responsive to patients, their families and the community within the context of a district health system. This module builds on skills and experiences from other family medicine modules done in preceding years, thus allowing them to practically implement their knowledge and skills in a context of a rural district health system, which exposes them to the undifferentiated patients, forcing them to understand where their patient comes from, why they are there, and what their expectations are. Part of the design of the final year Family Medicine module is the placement of students in environments which they are not familiar with educationally. These environments require agency as students have to ‘step up’ and become ‘doctors’ by taking responsibility for patient care, under the supervision of the medical officers. 
The students are supervised by medical officers, primary healthcare nurses and members of the interprofessional team during their rotation. These levels of supervision challenge the students’ thinking and norms, as they are used to being supervised and taught by specialists mainly in regional hospitals. Working and living within the same rural environment also allows the students to be exposed to the cultural context, which adds further dimensions to their learning that go beyond the basic clinical learning. Research suggests that for medical doctors to achieve positive patient outcomes through providing quality patient care, they need a supportive learning environment that fosters research and an evidence-based approach to their work. The rural district hospitals (DHs) where the final year medical students rotate were selected for student rotations as they provide quality care for the local population, are (reasonably) well-staffed, are supportive of the UKZN Family Medicine block and have staff who are willing to encourage student participation in the day-to-day activities of the hospital. The Department believes that these approaches create an ideal environment for learning. The student experience on the distributed rural platform has been framed within a particular context that includes: (1) leadership and governance which direct it, (2) the site which provides permission and opportunity to participate, (3) the community, (4) sufficient capacity of the students, (5) student willingness to engage and participate and (6) the structure of the block. The Family Medicine department has provided leadership and oversight for the distributed platform, selected sites which support students learning, which have adequate infrastructure, and staff who are willing to invest in the training of the next generation of health care professionals. 
Final-year medical students are oriented prior to their allocation to DHs, and regular visits to participating DHs by staff from the Family Medicine department at UKZN ensure that roles and expectations are shared with hospital staff. The design of the module with small numbers of students allocated to suitable sites, reflection on disorientating experiences, clear expectations of participation and engagement, provided the opportunity for transformational experiential learning. In order to fully understand the continuous impact of rural placement of students, and in order to ensure that their experiences remain positive, it is important to evaluate the programme, thus identifying areas needing improvement. The aim of the study was to understand student learning through the lens of experiential learning in a rural DH training platform for 6th year medical students who did the Family Medicine IPC3 block from 10th October 2022 to 25th November 2022. The Family Medicine department at the University of KwaZulu Natal (UKZN) believes that exposing medical students to a distributed rural learning environment is critical, as: (1) the rural context provides different but important learning opportunities to those provided in urban teaching hospitals and (2) it is a critical area in which students (and graduates) need to learn to work in. Evidence from around the world suggests that such exposure can influence where graduates decide to practice when their training is complete. It is important to note that approximately half of the global population live in rural areas and are served by less than a quarter of the world’s medical doctors, with sub-Saharan Africa served by only 4% of the global health workforce. To address these challenges, there is an urgent need to change selection of students, training content and context and continual support of medical doctors, which in turn would hopefully improve retention of the personnel once placed in such marginalised communities. 
The Family Medicine rural block is an attempt to expand the context of where training occurs. The Family Medicine Integrated Primary Care 3 (IPC3) module at UKZN is one of six final year modules that the 6th year medical students have to complete in order to attain their MBChB degree. This module is designed around the CanMEDS framework of core competencies, with assignments organised thematically around key roles of a physician, namely: communicator, collaborator, manager, health advocate, scholar and professional. The purpose of the rotation is for the students to experience and to practise primary care medicine that is responsive to patients, their families and the community within the context of a district health system. This module builds on skills and experiences from family medicine modules done in preceding years, allowing students to practically implement their knowledge and skills in the context of a rural district health system. This exposes them to undifferentiated patients and requires them to understand where each patient comes from, why they are there, and what their expectations are. Part of the design of the final year Family Medicine module is the placement of students in environments with which they are not educationally familiar. These environments require agency, as students have to ‘step up’ and become ‘doctors’ by taking responsibility for patient care under the supervision of the medical officers. The students are supervised by medical officers, primary healthcare nurses and members of the interprofessional team during their rotation. These levels of supervision challenge the students’ thinking and norms, as they are used to being supervised and taught by specialists mainly in regional hospitals. Working and living within the same rural environment also allows the students to be exposed to the cultural context, which adds further dimensions to their learning that go beyond the basic clinical learning. 
Research suggests that for medical doctors to achieve positive patient outcomes through providing quality patient care, they need a supportive learning environment that fosters research and an evidence-based approach to their work. The rural district hospitals (DHs) used for final year student rotations were selected as they provide quality care for the local population, are (reasonably) well-staffed, are supportive of the UKZN Family Medicine block and have staff who are willing to encourage student participation in the day-to-day activities of the hospital. The Department believes that these approaches create an ideal environment for learning. The student experience on the distributed rural platform has been framed within a particular context that includes: (1) leadership and governance which direct it, (2) the site which provides permission and opportunity to participate, (3) the community, (4) sufficient capacity of the students, (5) student willingness to engage and participate and (6) the structure of the block. The Family Medicine department has provided leadership and oversight for the distributed platform, selected sites which support student learning, which have adequate infrastructure, and staff who are willing to invest in the training of the next generation of health care professionals. Final-year medical students are oriented prior to their allocation to DHs, and regular visits to participating DHs by staff from the Family Medicine department at UKZN ensure that roles and expectations are shared with hospital staff. The design of the module, with small numbers of students allocated to suitable sites, reflection on disorientating experiences, and clear expectations of participation and engagement, provided the opportunity for transformational experiential learning. 
In order to fully understand the continuous impact of rural placement of students, and in order to ensure that their experiences remain positive, it is important to evaluate the programme, thus identifying areas needing improvement. The aim of the study was to understand student learning through the lens of experiential learning in a rural DH training platform for 6th year medical students who did the Family Medicine IPC3 block from 10th October 2022 to 25th November 2022. Final year MBChB students at UKZN are placed in groups of 2–6 students at one of 16 rural DHs in KwaZulu Natal (KZN) for their 7-week rural block. This qualitative study was done in November 2022 with final year students when they returned to UKZN after the completion of their Family Medicine rotation. Students were asked to participate in this study, and 24 students consented to participate in four semi-structured interviews (SSIs) and four Focus Group Discussions (FGDs) (4 × 5 students). The FGDs were held face-to-face and the SSIs were held online via Zoom at times that were mutually convenient. All of the SSIs and FGDs were facilitated by AR, a faculty member who has extensive experience in qualitative research and who was not involved in any student teaching in the final year Family Medicine module. Students were asked about their understanding of a learning environment, whether there was a learning environment at the DH where they were based (with examples), how this impacted on their learning while based at the DH, and how this environment compared to their experiences at other health institutions. Interviews lasted 45 min–1 h, and were recorded and transcribed verbatim. After repeated reading by all the authors, codes, categories and themes were identified from the data and are presented in the results section. Direct quotes are provided where appropriate, and all data reported have been anonymised. 
Ethical considerations Ethical approval to conduct this study was obtained from the University of KwaZulu-Natal, Biomedical Research Ethics Committee (No. BREC/00004935/2022) and gatekeeper permission was given by the Registrar of UKZN Dr K.E. Cleland. Final year medical students were very positive about the rural rotation as, often for the first time in their medical training, they felt that they were becoming doctors as they participated in the day-to-day activities of the hospital. Despite challenges with Wi-Fi connectivity, shortages of consumables and a lack of infrastructure, students recognised that: ‘[ A ] learning environment is literally anywhere, because any situation can be an opportunity to learn.’ (SSI4, SK, 26/11) They also recognised some of the factors that contributed to the learning environment (at the DH) were: ‘[ S ]upport, opportunity, and responsibility.’ (SSI2, RH, 28/11) The themes that emerged were grouped broadly into taking responsibility for learning and students’ learning experiences, generalism and the reality of context, being part of the team – teaching in context, and managing the learning environment – the design of the module. Taking responsibility for learning and students’ learning experiences Learning is a dynamic process that needs active participation. The DHs provided the context but there was a need for students to actively participate in that process – to be willing to learn, to actively participate in the opportunities provided, and be willing to take responsibility for the work that was entrusted to them. 
In this context, the learning opportunities are: ‘[ D ]ependent on you and what you make of it.’ (FGD2, Student 2, 25/11) ‘… and that students needed to be willing to learn and seek information and ask, as someone who shows enthusiasm encourages the teacher.’ (FGD2, Student 2, 25/11) Although this responsibility was initially: ‘[ V ]ery daunting because you feel a little overwhelmed.’ (SSI2, RH, 28/11) A willingness to actively participate helped students realise they knew things and to develop confidence in their abilities: ‘[ I ]t was a little scary at first, but it gave me a little bit more confidence as well. I said, okay, if I’m the person that needs to be doing this, this is part of my job, I’m going to have to do this. And at that moment, I took ownership – it gave me that sense of responsibility now that I get to see the patients, but because they are my responsibility that I needed to also do a really thorough job.’ (SSI2, RH, 28/11) The taking of responsibility was an important trigger for learning as: ‘[ R ]esponsibility motivates a lot more, without the responsibility there may not be that eagerness or willingness. I think you don’t comprehend the seriousness especially when it comes to medicine, you’re dealing with somebody’s life. If there’s no responsibility, if whatever you do doesn’t matter are you going to take it seriously? So, in that sense it put a lot of it into context, what our actions produce consequences.’ (FGD2, Student 3, 25/11) ‘I think it’s important that we were given the opportunity to manage someone on our own so what our primary plan is what the patient is getting so if that was slightly off, the Doctor would come and adjust it, but it allows us to reflect as well. 
Okay, this is what the type of Doctor I’m gonna be, this is the management plan that was actually done for this patient, so in the future I will always remember okay this Doctor corrected me by adding this on, so in the future I will never forget that.’ (FGD3, Student 2, 25/11) The structure of their rotation meant that they had first contact with patients whom they had to assess and develop a management plan for, were given responsibilities for patient care, were exposed to common conditions, and had to discuss patients with the doctors. Supervision was provided to ensure: ‘[ S ]afety netting.’ (FGD2, Student 3, 25/11) ‘… and quality patient care but at a distance.’ (SSI4, SK, 26/11); so that they could make meaningful decisions about the management of patients. Students felt that: ‘[ Y ]ou had supervision, but the independence gives you that room to learn on your own.’ (FGD3, Student 3, 25/11); which facilitated learning but ensured that patient safety was never compromised and that they were accountable for the patient care. When students were encouraged to get involved, given meaningful responsibility, trusted, supervised and felt valued and part of the team, it created a supportive environment for experiential learning to flourish. They felt that their work mattered and that they were making a meaningful contribution. This responsibility and accountability encouraged students to find the answers as: ‘[ W ]hen you’re dealing with a patient, and you don’t know the diagnosis, or the treatment, you go on the internet, or you go on the essential drug list (EDL), look at the treatment, look at the symptoms, okay this is how it presented, this is what it is, this is what I must do.’ (FGD3, Student 2, 25/11) Generalism and the reality of context Students recognised that there were factors in the generalist setting at the hospital, the local clinics and in the community that contributed to their learning. 
District hospitals are by definition generalist and students found the exposure to people-focussed care transformative as they learned about the clinical, social and contextual factors contributing to illness: ‘I remember my first day in outpatients department (OPD) – the first patient had arthritis. It was like, okay – I got this. I remember my stuff. Second patient was uncontrolled hypertension, third patient had vaginal discharge syndrome (VDS). I was like, Okay, This is starting to get a bit out of hand. And next patient came back for a review of his X-ray. I’m not sure what’s going on. The next patient is chronic diarrhoea you know. You are constantly on alerts. You’re constantly learning all the different aspects of medicine.’ (SSI1, PK, 25/11) First contact with patients in casualty and OPD was an important stimulus for learning and developing confidence as we (would): ‘[ C ]lerk and examine the patient and come up with an assessment and plan for the patient and after that discuss it with the senior doctor. And then they will approve and also add some things if they think we missed some things with the management. In other blocks the management is already there in rural you do it yourself, so you really gain a lot, you really gain a lot of confidence going into internship.’ (SSI3, SK, 28/11) Not: ‘[ J ]ust needing to push the line, meant that students could go to the Doctor in casualty and ask, “I’m not too sure about this patient.” And that Doctor would actually take time out and come see the patient in OPD.’ (FGD3, Student 1, 25/11) The students experienced person-centred care (as opposed to disease-centred care) which was holistic as the staff were concerned about the context of the patient (does this patient have electricity at home to keep their insulin safe) and humanized medicine. Staff were treating the patient as a person (FGD2, Student 1, 25/11). 
Doing home visits with the Ward Based Outreach Teams exposed students to the socioeconomic realities that patients experience every day and: ‘[ W ]as a big part of my learning. Going out with the mobile clinical and the nurses that was very beneficial, because going into the actual community, we are seeing first-hand what is happening, and that was a big learning environment personally, because I had not seen poverty at that level. You know that there is poverty, but you not experiencing it first hand and the home visits were very eye opening … The one patient that I saw he was staying in an attached room, and there was nothing there, there was no toilet, there nothing besides a bed, and it ended up that he needed emergency care, and we actually took him with us back to the hospital because he needed emergency care.’ (SSI2, RH, 28/11) ‘I mean when patients come to you, you’re viewing the disease process. … [ W ]e can understand why a patient will come in with maybe gastro, and then go back home. And then suddenly they’re returning with the same issue because they don’t have a clean water supply, so they’re obtaining their water from the river, and then they’re washing their clothes in a pond … I’m telling them, you know, to do these things, you know to be hygienic, and you know, you can instruct them from the hospital, but the understanding only comes from when you are out there in the community, and seeing what they actually have. I think that was a big part of my learning.’ (SSI2, RH, 28/11) To better understand whole person medicine, students were required to visit a traditional healer to explore his/her understanding of health. For many students this was: ‘[ M ]y first time. I needed to find an understanding of (why) patients are going to a traditional healer before coming to the hospital, to understand it and build a relationship. 
I mean the traditional healer has been there for years and built a relationship with the patient and the patient’s family.’ (FGD3, Student 1, 25/11) Being part of the team – Teaching in context Students learnt the value of: ‘[ W ]orking with the different disciplines (which helped me) gain an understanding of what exactly it is they do.’ (SSI2, RH, 28/11) Although different professions often work in parallel, interprofessional care has been shown to improve patient outcomes and at the DH students were able to see the value of interprofessional collaboration in the care of patients as: ‘[ T ]here was a lot more team work as well as interprofessional collaboration … I saw interaction between the MDT and doctor. So the doctor would see the patient and the dietitian or whoever would be there would assess the patient and they’d discuss it face to face and then make their notes, come up with a plan.’ (FGD2, Student 3, 25/11) ‘Yeah, it’s just multi-disciplinary, so it was a good environment because every Thursday we had grand round meetings where there would be sharing of information from the radiographers, the physios and the dietitians. So, like there is continuous learning at the weekly meetings.’ (FGD3, Student 4, 25/11) Staff at the DH were keen and willing to teach, actively encouraged students’ participation, even calling them to see interesting patients – with students remembering: ‘[ T ]he one Doctor at casualty yoh she was teaching us everything, like all the skills. There’s a patient to suture, come, there’s an ICD, come, another [lumbar] puncture, come.’ (FGD3, Student 3, 25/11) ‘The MO on call at casualty with us was very enthusiastic to teach us this new skill. 
He was very knowledgeable and well experienced, very patient because we were making a lot of mistakes after he demonstrated it like once, but he was very patient and after each mistake he would give us constructive feedback, you know tell us where we’re going wrong, how we can improve, very supportive and complimentary when we did do it well.’ (FGD1, Student 6, 25/11) In addition to the informal clinical teaching, staff mentored them: ‘… noticed gaps and were willing to fill in the gaps that you are lacking.’ (FGD2, Student 1, 25/11); and provided role models in terms of teachability, continuous learning, professional communication and creating a safe environment in which mistakes could be acknowledged and learnt from: ‘Yes, so they told us, feel free to ask me any questions, (and) if I’m not sure of the answer myself, I will go and check it up, and we will discuss [ it ] tomorrow. So, they were also like open to me asking questions.’ (SSI2, RH, 28/11) The staff at the hospital created an environment in which students could ask questions and learn without feeling like a failure. The staff had the knowledge without the ego and without the toxicness (FGD2, Student 3, 25/11) which they had experienced at some of the central teaching hospitals. Students felt that: ‘[ T ]he environment allowed us to actually question things a lot more, have these kind of discussions with the doctors and say I’m not sure I don’t know or even advocate for a patient.’ (FGD2, Student 3, 25/11) Students felt that: ‘[ E ]ven when I made mistakes, I made a lot of mistakes, I wasn’t reprimanded as such, but I was given constructive criticism which was very vital for my growth.’ (FGD3, Student 3, 25/11) ‘They created an environment that was safe enough for us to be silly, like ask stupid questions. It was based on the foundation of respect, we felt respected and seen. In Durban we’re treated like we’re just after sharps containers, then there’s us, that’s how I felt. 
So, there we were seen, we were heard, and it made the environment easy for us to communicate our shortfalls and the gaps in our knowledge and how we wanted to be assisted. So it promoted a culture of reading, it promoted a culture of conversation and I felt that instead of regurgitation of information we were thinking.’ (SSI4, SK, 26/11) Senior management made a concerted effort to ensure that: ‘[ E ]veryone’s on the same level … doctors aren’t treated any special to nurses so everyone is treated with the same level of respect and I think that contributes also to the way doctors treat or teach us students.’ (FGD2, Student 4, 25/11) Even when a patient died following a failed resuscitation, there was opportunity to discuss and learn from that experience as: ‘[ T ]hey handled it when I spoke about (the failed resuscitation). They told me that this does happen – they reassured me that you know you need to take this as a learning opportunity because you’re gonna be dealing with this next year – they understood because it’s not the first time that it happened these things occur.’ (FGD2, Student 3, 25/11) Structured learning activities (CME, journal clubs, morbidity and mortality meetings) at the hospitals meant that students: ‘… benefited a lot from those meetings in the morning because they would present different topics.’ (FGD1, Student 2, 25/11) Patient presentations and critique of the management meant that: ‘[ I ]f they’ve made any mistakes (they learn) what they should do better, so they were encouraging.’ (FGD1, Student 5, 25/11) In addition, students were encouraged to participate because: ‘[ S ]tudents … you guys are still fresh with the theory. Just please tell us what the latest thing about this and this. So that was actually quite good. 
It was a safe space.’ (SSI1, PK, 25/11) However, this was not the case at every hospital, with students recognising that at some hospitals: ‘[ T ]he environment wasn’t very safe or comfortable as the senior doctors were very critical of anyone who did provide any feedback.’ (FGD1, Student 6, 25/11) This had the effect of stifling discussion and opportunities to learn: ‘[ A ]s seniors are very harsh, they do ridicule you at times.’ (FGD1, Student 6, 25/11) In addition, not all the MOs were equally willing to spend time teaching students. Although there were opportunities for students to get involved in patient management, their experience was that: ‘[ S ]ometimes the doctors would just suture, they would not teach you that there’s interrupted suture, there’s continuous suture, there’s these type of knots.’ (FGD1, Student 2, 25/11) Managing the learning environment – Design of the module There were important aspects in the module design aimed at maximising the learning opportunities in the environment where students are placed. These included smaller student numbers, identifying suitable training sites and communicating clearly the expectations the module would place on students. The University of KwaZulu Natal has identified 16 DHs in KZN where students can do the rural block and places small groups of 2–4 students at each site that is being used. The smaller ratio of students to medical staff meant that: ‘[ T ]here was no standing back.’ (FGD2, Student 3, 25/11) ‘[ T ]his is your opportunity to shine, your opportunity to learn. 
As the teacher to student ratio made a big difference (which meant) that our intakes were one on one, one student with one doctor, so we got to be a lot more hands on, we got to manage a patient from start to finish just with some oversight, some supervision.’ (FGD2, Student 3, 25/11) ‘[ W ]e got the attention we felt we were needed and that pushed me to learning even more.’ (SSI4, SK, 26/11) Smaller numbers also meant that: ‘[ B ]ecause it was just the two of us the doctors ended up knowing us – so if there was something they’d know, “ah call the students they might like to see this, call the students we have an ascitic tap, call the students.” – I think that allowed us more exposure.’ (FGD4, Student 5, 25/11) Suitable training sites are essential and it is important to recognise that not all staff are interested or willing to teach. Students at sites where they perceived that they were not valued, where staff suggested that: ‘They could do something better with (their) time.’ (FGD1, Student 6, 25/11) found the experience: ‘A bit discouraging.’ (FGD1, Student 6, 25/11) which had a negative effect on their learning: ‘Yeah, also while I was in the wards I did have a few encounters with doctors who would just tell me – “[ Y ]ou know you don’t have to be here, you can leave you know.” They just didn’t want to have you around. They didn’t see any value in you being present in the wards. So, I would go, I would introduce myself, I would offer to do the work for them, clerk patients, present to them but yeah they weren’t interested they thought that like I could do something better with my time.’ (FGD1, Student 5, 25/11) Despite the willingness of staff to host, teach and supervise students, it is essential that the university communicates clearly to ensure that all hospital staff are aware that students are coming and what students should be involved with. 
Without good communication students may not be expected: ‘Unfortunately, at XXX, the meetings were there, but for the first three weeks we didn’t get to attend the meetings. The first one they chased us out, and we tried to explain that it is part of our role that we need to attend these meetings, it seems like they are not aware of the structure of our rotation they said no we are not allowed. And our supervisor was on leave at that time, so we had no one. But, when she came back, we discussed it with her, and they actually allowed us in.’ (FGD1, Student 1, 25/11) The Department of Family Medicine has structured the block around the principles of experiential learning rather than didactic teaching and for most students the learning process did not feel like an academic exercise. Students are used to: ‘[ L ]ectures on a Monday morning, journal clubs on a Wednesday afternoon – we want to be presenting. But when it comes to district hospitals its not academic but focuses on patient management.’ (SSI1, PK, 25/11) Students also felt that the structure of the programme during the Family Medicine rotation facilitated their learning. There were weekly assignments that were linked to areas of the learning environment and process that they needed to complete which: ‘[ P ]ushes you that every week you have to study something and on Sunday you had to submit an assignment. So that made us keep on reading, and facilitated learning goals and learning objectives for us the whole week.’ (FGD3, Student 3, 25/11) 
In this context, the learning opportunities are: ‘[ D ]ependent on you and what you make of it.’ (FGD2, Student 2, 25/11) ‘… and that students needed to be willing to learn and seek information and ask, as someone who shows enthusiasm encourages the teacher.’ (FGD2, Student 2, 25/11) Although this responsibility was initially: ‘[ V ]ery daunting because you feel a little overwhelmed.’ (SSI2, RH, 28/11) A willingness to actively participate helped students realise they knew things and to develop confidence in their abilities: ‘[ I ]t was a little scary at first, but it gave me a little bit more confidence as well. I said, okay, if I’m the person that needs to be doing this, this is part of my job, I’m going to have to do this. And at that moment, I took ownership – it gave me that sense of responsibility now that I get to see the patients, but because they are my responsibility that I needed to also do a really thorough job.’ (SSI2, RH, 28/11) The taking of responsibility was an important trigger for learning as: ‘[ R ]esponsibility motivates a lot more, without the responsibility there may not be that eagerness or willingness. I think you don’t comprehend the seriousness especially when it comes to medicine, you’re dealing with somebody’s life. If there’s no responsibility, if whatever you do doesn’t matter are you going to take it seriously? So, in that sense it put a lot of it into context, what our actions produce consequences.’ (FGD2, Student 3, 25/11) ‘I think it’s important that we were given the opportunity to manage someone on our own so what our primary plan is what the patient is getting so if that was slightly off, the Doctor would come and adjust it, but it allows us to reflect as well. 
Okay, this is what the type of Doctor I’m gonna be, this is the management plan that was actually done for this patient, so in the future I will always remember okay this Doctor corrected me by adding this on, so in the future I will never forget that.’ (FGD3, Student 2, 25/11) The structure of their rotation meant that they had first contact with patients whom they had to assess and develop a management plan for, were given responsibilities for patient care, were exposed to common conditions, and had to discuss patients with the doctors. Supervision was provided to ensure: ‘[ S ]afety netting.’ (FGD2, Student 3, 25/11) ‘… and quality patient care but at a distance.’ (SSI4, SK, 26/11); so that they could make meaningful decisions about the management of patients. Students felt that: ‘[ Y ]ou had supervision, but the independence gives you that room to learn on your own.’ (FGD3, Student 3, 25/11); which facilitated learning but ensured that patient safety was never compromised and that they were accountable for the patient care. When students were encouraged to get involved, given meaningful responsibility, trusted, supervised and felt valued and part of the team, it created a supportive environment for experiential learning to flourish. They felt that their work mattered and that they were making a meaningful contribution. This responsibility and accountability encouraged students to find the answers as: ‘[ W ]hen you’re dealing with a patient, and you don’t know the diagnosis, or the treatment, you go on the internet, or you go on the essential drug list (EDL), look at the treatment, look at the symptoms, okay this is how it presented, this is what it is, this is what I must do.’ (FGD3, Student 2, 25/11) Students recognised that there were factors in the generalist setting at the hospital, the local clinics and in the community that contributed to their learning. 
District hospitals are by definition generalist and students found the exposure to people-focussed care transformative as they learned about the clinical, social and contextual factors contributing to illness: ‘I remember my first day in outpatients department (OPD) – the first patient had arthritis. It was like, okay – I got this. I remember my stuff. Second patient was uncontrolled hypertension, third patient had vaginal discharge syndrome (VDS). I was like, Okay, This is starting to get a bit out of hand. And next patient came back for a review of his X-ray. I’m not sure what’s going on. The next patient is chronic diarrhoea you know. You are constantly on alerts. You’re constantly learning all the different aspects of medicine.’ (SSI1, PK, 25/11) First contact with patients in casualty and OPD was an important stimulus for learning and developing confidence as we (would): ‘[ C ]lerk and examine the patient and come up with an assessment and plan for the patient and after that discuss it with the senior doctor. And then they will approve and also add some things if they think we missed some things with the management. In other blocks the management is already there in rural you do it yourself, so you really gain a lot, you really gain a lot of confidence going into internship.’ (SSI3, SK, 28/11) Not: ‘[ J ]ust needing to push the line, meant that students could go to the Doctor in casualty and ask, “I’m not too sure about this patient.” And that Doctor would actually take time out and come see the patient in OPD.’ (FGD3, Student 1, 25/11) The students experienced person-centred care (as opposed to disease-centred care) which was holistic as the staff were concerned about the context of the patient (does this patient have electricity at home to keep their insulin safe) and humanized medicine. Staff were treating the patient as a person (FGD2, Student 1, 25/11). 
Doing home visits with the Ward Based Outreach Teams exposed students to the socioeconomic realities that patients experience every day and: ‘[ W ]as a big part of my learning. Going out with the mobile clinical and the nurses that was very beneficial, because going into the actual community, we are seeing first-hand what is happening, and that was a big learning environment personally, because I had not seen poverty at that level. You know that there is poverty, but you not experiencing it first hand and the home visits were very eye opening … The one patient that I saw he was staying in an attached room, and there was nothing there, there was no toilet, there nothing besides a bed, and it ended up that he needed emergency care, and we actually took him with us back to the hospital because he needed emergency care.’ (SSI2, RH, 28/11) ‘I mean when patients come to you, you’re viewing the disease process. … [ W ]e can understand why a patient will come in with maybe gastro, and then go back home. And then suddenly they’re returning with the same issue because they don’t have a clean water supply, so they’re obtaining their water from the river, and then they’re washing their clothes in a pond … I’m telling them, you know, to do these things, you know to be hygienic, and you know, you can instruct them from the hospital, but the understanding only comes from when you are out there in the community, and seeing what they actually have. I think that was a big part of my learning.’ (SSI2, RH, 28/11) To better understand whole person medicine, students were required to visit a traditional healer to explore his/her understanding of health. For many students this was: ‘[ M ]y first time. I needed to find an understanding of (why) patients are going to a traditional healer before coming to the hospital, to understand it and build a relationship. 
I mean the traditional healer has been there for years and built a relationship with the patient and the patient's family.' (FGD3, Student 1, 25/11) Students learnt the value of: '[ W ]orking with the different disciplines (which helped me) gain an understanding of what exactly it is they do.' (SSI2, RH, 28/11) Although different professions often work in parallel, interprofessional care has been shown to improve patient outcomes, and at the DH students were able to see the value of interprofessional collaboration in the care of patients as: '[ T ]here was a lot more team work as well as interprofessional collaboration … I saw interaction between the MDT and doctor. So the doctor would see the patient and the dietitian or whoever would be there would assess the patient and they'd discuss it face to face and then make their notes, come up with a plan.' (FGD2, Student 3, 25/11) 'Yeah, it's just multi-disciplinary, so it was a good environment because every Thursday we had grand round meetings where there would be sharing of information from the radiographers, the physios and the dietitians. So, like there is continuous learning at the weekly meetings.' (FGD3, Student 4, 25/11) Staff at the DH were keen and willing to teach, actively encouraged students' participation, even calling them to see interesting patients – with students remembering: '[ T ]he one Doctor at casualty yoh she was teaching us everything, like all the skills. There's a patient to suture, come, there's an ICD, come, another [lumbar] puncture, come.' (FGD3, Student 3, 25/11) 'The MO on call at casualty with us was very enthusiastic to teach us this new skill.
He was very knowledgeable and well experienced, very patient because we were making a lot of mistakes after he demonstrated it like once, but he was very patient and after each mistake he would give us constructive feedback, you know tell us where we're going wrong, how we can improve, very supportive and complimentary when we did do it well.' (FGD1, Student 6, 25/11) In addition to the informal clinical teaching, staff mentored them: '… noticed gaps and were willing to fill in the gaps that you are lacking.' (FGD2, Student 1, 25/11); and provided role models in terms of teachability, continuous learning, professional communication and creating a safe environment in which mistakes could be acknowledged and learnt from: 'Yes, so they told us, feel free to ask me any questions, (and) if I'm not sure of the answer myself, I will go and check it up, and we will discuss [ it ] tomorrow. So, they were also like open to me asking questions.' (SSI2, RH, 28/11) The staff at the hospital created an environment in which students could ask questions and learn without feeling like a failure. The staff had the knowledge without the ego and without the toxicness (FGD2, Student 3, 25/11) which they had experienced at some of the central teaching hospitals. Students felt that: '[ T ]he environment allowed us to actually question things a lot more, have these kind of discussions with the doctors and say I'm not sure I don't know or even advocate for a patient.' (FGD2, Student 3, 25/11) Students felt that: '[ E ]ven when I made mistakes, I made a lot of mistakes, I wasn't reprimanded as such, but I was given constructive criticism which was very vital for my growth.' (FGD3, Student 3, 25/11) 'They created an environment that was safe enough for us to be silly, like ask stupid questions. It was based on the foundation of respect, we felt respected and seen. In Durban we're treated like we're just after sharps containers, then there's us, that's how I felt.
So, there we were seen, we were heard, and it made the environment easy for us to communicate our shortfalls and the gaps in our knowledge and how we wanted to be assisted. So it promoted a culture of reading, it promoted a culture of conversation and I felt that instead of regurgitation of information we were thinking.’ (SSI4, SK, 26/11) Senior management made a concerted effort to ensure that: ‘[ E ]veryone’s on the same level … doctors aren’t treated any special to nurses so everyone is treated with the same level of respect and I think that contributes also to the way doctors treat or teach us students.’ (FGD2, Student 4, 25/11) Even when a patient died following a failed resuscitation, there was opportunity to discuss and learn from that experience as: ‘[ T ]hey handled it when I spoke about (the failed resuscitation). They told me that this does happen – they reassured me that you know you need to take this as a learning opportunity because you’re gonna be dealing with this next year – they understood because it’s not the first time that it happened these things occur.’ (FGD2, Student 3, 25/11) Structured learning activities (CME, journal clubs, morbidity and mortality meetings) at the hospitals meant that students: ‘… benefited a lot from those meetings in the morning because they would present different topics.’ (FGD1, Student 2, 25/11) Patient presentations and critique of the management meant that: ‘[ I ]f they’ve made any mistakes (they learn) what they should do better, so they were encouraging.’ (FGD1, Student 5, 25/11) In addition, students were encouraged to participate because: ‘[ S ]tudents … you guys are still fresh with the theory. Just please tell us what the latest thing about this and this. So that was actually quite good. 
It was a safe space.' (SSI1, PK, 25/11) However, this was not the case at every hospital, with students recognising that at some hospitals: '[ T ]he environment wasn't very safe or comfortable as the senior doctors were very critical of anyone who did provide any feedback.' (FGD1, Student 6, 25/11) This had the effect of stifling discussion and opportunities to learn: '[ A ]s seniors are very harsh, they do ridicule you at times.' (FGD1, Student 6, 25/11) In addition, not all the MOs were equally willing to spend time teaching students. Although there were opportunities for students to get involved in patient management, their experience was that: '[ S ]ometimes the doctors would just suture, they would not teach you that there's interrupted suture, there's continuous suture, there's these type of knots.' (FGD1, Student 2, 25/11) There were important aspects in the module design aimed at maximising the learning opportunities in the environment where students are placed. These included smaller student numbers, identifying suitable training sites and clearly communicating the expectations the module would place on students. The University of KwaZulu-Natal has identified 16 DHs in KZN where students can do the rural block and places small groups of 2–4 students at each site being used. The smaller ratio of students to medical staff meant that: '[ T ]here was no standing back.' (FGD2, Student 3, 25/11) '[ T ]his is your opportunity to shine, your opportunity to learn.
As the teacher to student ratio made a big difference (which meant) that our intakes were one on one, one student with one doctor, so we got to be a lot more hands on, we got to manage a patient from start to finish just with some oversight, some supervision.' (FGD2, Student 3, 25/11) '[ W ]e got the attention we felt we were needed and that pushed me to learning even more.' (SSI4, SK, 26/11) Smaller numbers also meant that: '[ B ]ecause it was just the two of us the doctors ended up knowing us – so if there was something they'd know, "ah call the students they might like to see this, call the students we have an ascitic tap, call the students." – I think that allowed us more exposure.' (FGD4, Student 5, 25/11) Suitable training sites are essential, and it is important to recognise that not all staff are interested or willing to teach. Students at sites where they perceived that they were not valued, where staff suggested that: 'They could do something better with (their) time.' (FGD1, Student 6, 25/11) found the experience: 'A bit discouraging.' (FGD1, Student 6, 25/11) which had a negative effect on their learning: 'Yeah, also while I was in the wards I did have a few encounters with doctors who would just tell me – "[ Y ]ou know you don't have to be here, you can leave you know." They just didn't want to have you around. They didn't see any value in you being present in the wards. So, I would go, I would introduce myself, I would offer to do the work for them, clerk patients, present to them but yeah they weren't interested they thought that like I could do something better with my time.' (FGD1, Student 5, 25/11) Despite the willingness of staff to host, teach and supervise students, it is essential that the university communicates clearly to ensure that all hospital staff are aware that students are coming and what students should be involved with.
Without good communication, hospital staff may not expect students: 'Unfortunately, at XXX, the meetings were there, but for the first three weeks we didn't get to attend the meetings. The first one they chased us out, and we tried to explain that it is part of our role that we need to attend these meetings, it seems like they are not aware of the structure of our rotation they said no we are not allowed. And our supervisor was on leave at that time, so we had no one. But, when she came back, we discussed it with her, and they actually allowed us in.' (FGD1, Student 1, 25/11) The Department of Family Medicine has structured the block around the principles of experiential learning rather than didactic teaching, and for most students the learning process did not feel like an academic exercise. Students are used to: '[ L ]ectures on a Monday morning, journal clubs on a Wednesday afternoon – we want to be presenting. But when it comes to district hospitals it's not academic but focuses on patient management.' (SSI1, PK, 25/11) Students also felt that the structure of the programme during the Family Medicine rotation facilitated their learning. There were weekly assignments, linked to areas of the learning environment and process, that they needed to complete, which: '[ P ]ushes you that every week you have to study something and on Sunday you had to submit an assignment. So that made us keep on reading, and facilitated learning goals and learning objectives for us the whole week.' (FGD3, Student 3, 25/11) The aim of the study was to explore student learning through the lens of experiential learning in a rural DH training platform. While participants recognised that learning opportunities ' were everywhere ' (SSI4, SK, 26/11), recognition of gaps between prior knowledge and the experiences triggered by real-life situations was an essential first step in their learning, enhanced by critical reflection on the experiences.
These hands-on experiential learning opportunities stimulated students to recognise their gaps, reflect on these and construct knowledge (learning). The assignments of this block require reflection using the what (concrete experience), so what (what was triggered and why) and now what (which required agency – what am I going to do in response, and how can I apply my learning in future situations) format, which is a modification of Kolb's experiential learning cycle. Experiential learning thus relies on (student) engagement, reflection and application of knowledge. Artino et al. see this reaction to experience as the stimulation to become a self-regulated lifelong learner with goal-directed activities achieved by controlling and managing cognition (thinking), affect (emotion) and behaviour (action). This creates lifelong learners who are able to move from an 'all knowing' (traditional view of a doctor) to an 'all learning' doctor. Medical educators need to facilitate this by emphasising how students learn rather than how students are taught. The rural DH context created multiple opportunities for meaningful patient interactions, which facilitated student learning about holistic, patient-centred care, social determinants of health and the role context plays in health and illness. Seeing undifferentiated patients as the first-contact health care provider challenged students to be patient-centred (rather than disease-centred) and to apply their knowledge in an integrated fashion (rather than siloed) when seeing patients with multiple problems. While still being supervised, students were also expected to take much greater responsibility for engaging with patients and planning their management. These were disorientating experiences, as students were used to seeing patients already 'allocated' to specific domains and often being observers rather than actors in the management of patients.
In addition, home and clinic visits gave them new insights into the context, challenged their assumptions about the validity of the advice they provided, and led them to consider alternative solutions and appreciate the role and contribution of the multi-disciplinary team (MDT) in ensuring comprehensive care. For some students, this was the first real exposure to the social determinants of health (distance to facility, poverty, water issues, etc.), highlighting the complexity of addressing these social determinants, reducing health inequities and barriers to health access, and understanding the critical role of the MDT and a multisectoral approach to these issues, in collaboration with all the stakeholders. By reflecting on their previous understanding, these disorientating dilemmas can lead to transformative learning, as baseline beliefs/assumptions are challenged by these encounters and students make new meaning. This access allowed for the giving and taking of responsibility for patient care by final-year students, which was a significant stimulus for their learning. Recognition by the students of the many learning opportunities presented, a willingness to take responsibility for their own learning and a need to apply that learning in the context of patient management stimulated authentic learning in keeping with development as a lifelong learner. This experiential learning enabled students to build on pre-existing knowledge as they gained competency, by participating in meaningful and authentic work, done in a supportive environment under appropriate supervision with constructive feedback. Relationships with hospital staff created a learning environment in which students felt valued, respected and heard (no silly questions, we are all learning), were able to contribute to service delivery, and could recognise gaps in their knowledge and take responsibility for their learning from these experiences.
Continuity of relationships between key stakeholders, clear roles and responsibilities, and trust in the capabilities of students facilitated access and enabled students to integrate into the hospital team and immerse themselves in the hospital context. Meaningful participation in the activities of the hospital created symbiotic, bidirectional and reciprocal relationships, contributing to collaborative learning and co-production of knowledge between patients, students, staff and faculty. The supervision, support, role modelling and mentoring imparted by medical officers were provided because of these relationships. While efforts to create national consensus around the value of teaching and training on a distributed platform are to be commended, meaningful relationships with local stakeholders (community, hospital staff, faculty members, students) are essential, as much of the significant learning is mediated through these relationships. These local relationships arising from ongoing interactions among role players are often serendipitous and difficult to achieve, yet essential in reaching the expected outcome. The framework needs modification to show that relationships are all-encompassing and should be placed on the outside of the circle rather than as a small circle in the middle. The vision for distributed health professions training, which the Family Medicine Department at UKZN has embraced, is that health professional learning should be transformative, reflective, self-directed, interprofessional, collaborative and peer-to-peer, socially accountable and community engaged. Findings from this study provide further evidence of the benefit of training on a distributed platform.
These include practical hands-on experiences, exposure to the breadth of the health care system in rural and underserved areas, providing generalist care to patients with undifferentiated problems, seeing a broad range of patients in terms of the ecology of medical care, exposure to the burden of disease relevant to the local community, insight into the social determinants of health, working with the MDT, and the possibility of rediscovering their altruism, which is often lost at medical school.
Limitations
There are inherent limitations to qualitative studies as only a relatively small number of people participated in the study, and the findings cannot be generalised to all students doing the rotation or other settings where a similar module is being implemented. However, the richness of the information obtained provides important insight that deepens our understanding of how students engage in the particular learning environment.
In conclusion, this study reinforces the advantages of distributed, experiential training, highlighting the positive impact of meaningful participation and transformative learning opportunities. Building long-term relationships with local health care professionals on distributed platforms and structuring learning activities to ensure active participation, which allows students to take responsibility, is essential.
Universities are encouraged to provide opportunities for student rotations through distributed training platforms focussing on disorientating experiences that trigger student reflection and development into lifelong learners. |
Microbial safety and chemical characteristics of sausage coated by chitosan and postbiotics obtained from Lactobacillus bulgaricus
Introduction
Perishable foods like fruits, vegetables, and meat products have a limited shelf-life after harvest or production. As consumers increasingly prioritize nutrition, health, and natural products, the meat industry must produce high-quality and safe products. Meat and its derivatives are a crucial part of the modern diet, but they are prone to chemical deterioration due to factors like oxidation, air, light, and temperature. Microbial growth can also cause unpleasant changes. To address this, researchers are exploring the use of biocompatible coatings and packaging with functional materials to prevent or minimize these negative reactions, offering a promising way to sustainably preserve meat products. Sausages, being a complex and emulsified mixture of meat, seasonings, and various nutrients, are susceptible to chemical reactions and microbial spoilage. Although heat treatment can eliminate pathogens, contamination during production can still pose risks of foodborne bacteria. In response to consumer demand for fewer chemical additives, the meat industry is searching for natural preservatives from sources like bacteria, plants, and animals. Additionally, due to growing environmental concerns about traditional packaging materials, there is a growing interest in biodegradable alternatives made from biopolymers, which offer a sustainable solution for sausage packaging and coating. Chitosan, a versatile biopolymer, is derived from chitin, the second-most abundant biopolymer in the world. Chitin is found in various natural sources, including the shells of mollusks and crustaceans, fungal cell walls, and insect cuticles.
Chitosan shares similarities with cellulose and possesses a range of physical, chemical, and biological properties, making it a valuable material with diverse and multifunctional applications. In the food industry, chitosan is widely used as a bioactive coating due to its ability to form films, and it can be used alone or combined with other ingredients and constituents. Lactic acid bacteria (LAB), such as Lactobacillus, Lactococcus, Pediococcus, Leuconostoc and Streptococcus, and their by-products and metabolites have shown potential in food packaging by inhibiting and impeding the growth of pathogenic microorganisms. LAB's antimicrobial properties come from their ability to reduce pH, prevent toxin formation, and produce inhibitory compounds. Postbiotics, which are bioactive substances derived from probiotics, have gained attention as natural preservatives. These inert microbial components or their beneficial by-products have antimicrobial potency. Postbiotics are easy to use in industrial settings and are stable under various conditions. Examples of postbiotics include bacteriocins, vitamins, peptides, and organic acids. However, some postbiotics have limited effectiveness against certain bacteria, requiring the use of a chelating agent to enhance their antibacterial spectrum by altering bacterial membrane permeability. Chitosan can act not only as an antimicrobial agent but also as a chelating agent, enhancing the antibacterial effects of postbiotics against a broader range of bacteria. Postbiotics can also serve as decontaminants. Combining chitosan and postbiotics can create a synergistic effect, improving the microbial and chemical quality of sausages. This novel approach leverages the biodegradable and antimicrobial properties of chitosan, along with the antimicrobial properties of postbiotics, to inhibit microbial growth and oxidative processes in sausages.
The unique physical and chemical properties of chitosan make it an ideal choice for food coatings, and integrating postbiotics can further enhance its effectiveness. This innovative combination of chitosan and postbiotics addresses the challenges of chemical deterioration and microbial spoilage in meat products, while also meeting the growing consumer demand for natural and sustainable food coating solutions. This approach promotes environmental sustainability and provides a secure and effective preservation method for meat products, aligning with the evolving requirements of the modern food industry. Overall, research on characterizing postbiotics derived from LAB is relatively scarce. To date, there is no published research on the characterization of L. bulgaricus-derived postbiotics and their combined use with chitosan as a coating for sausages with reduced nitrite content, specifically in terms of effectiveness against E. coli and S. aureus during cold storage. This study aims to investigate this feature and compare the results to a commercial sausage formulation with 120 mg/kg nitrite.
Preparation of postbiotic obtained from L. bulgaricus
To prepare the postbiotic, L. bulgaricus (Persian Type Culture Collection, Iran) was first cultured in De Man, Rogosa and Sharpe (MRS) broth at 37 °C for 24 h. The resulting bacterial suspension was then subjected to stirring and ultrasonication to break down the cells. The mixture was centrifuged at 4000 rpm for 15 min and filtered through a 0.45 μm filter to obtain the postbiotic solution. The final postbiotic solutions were prepared at two concentrations, 150 and 300 mg/L, using physiological serum as the solvent.
Properties of postbiotic
Antimicrobial activity
The antimicrobial activity of the postbiotics was evaluated using the well diffusion method. The postbiotics were applied to agar plates that had been overlaid with E. coli (ATCC 35150) at a concentration of 0.5 McFarland turbidity standard.
This standard corresponds to approximately 1.5 × 10⁸ CFU/mL. To fix the inoculum concentration, the E. coli suspension was compared with this standard and adjusted to the desired level. The plates were then incubated at 37 °C for 24 h. The resulting inhibition zones were measured in millimeters. Chloramphenicol served as the positive control, while MRS broth was used as the negative control to validate the assay results.
Antioxidant activity
A stock solution of the postbiotic was prepared at 1 mg/mL. A DPPH (1,1-diphenyl-2-picrylhydrazyl radical) solution was prepared by dissolving 2.5 mg of this compound in 100 mL of methanol. Then, 100 µL of the postbiotic sample was mixed with 100 µL of Trolox solution (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid) and 3.9 mL of DPPH solution. After 30 min in an ultrasonic bath, the mixture was incubated in the dark for 45 min, and the absorbance was measured at 517 nm using a spectrophotometer (Jenway-6505 UV/Vis, UK). The percentage radical-scavenging activity (%SA) was calculated using Eq. (1):

%SA = [100 × (A_control − A_sample)] / A_control    (1)

where A_control is the absorbance of a solution with DPPH and methanol, and A_sample is the absorbance of the DPPH solution in the presence of the postbiotic or the standard used, i.e. the Trolox solution.
Determination of volatile compounds
To extract the volatile compounds from the postbiotic, 30 mL of postbiotic was twice mixed with an equal amount of ethyl acetate for 15 min.
The mixture of supernatant and ethyl acetate was then separated into aqueous and organic portions, and the organic portions were combined and dried using a rotary evaporator. The dried sample was then dissolved in 500 µL of methanol and left to sit overnight at room temperature. The sample was then filtered and analyzed using gas chromatography-mass spectrometry (GC-MS) (Agilent 7890 and MS Agilent 5975, USA) to identify the volatile compounds present in the postbiotic. The oven temperature was set to 40 °C and held for 5 min to allow for the initial vaporization of the sample. The injector temperature was 250 °C, and 1 µL of the sample solution was injected. Helium was used as the carrier gas. The oven temperature was ramped up from 40 °C to 250 °C at a rate of 10 °C per minute, then maintained at 250 °C for 5 min to ensure complete elution of volatile compounds. The total run time for the GC program was approximately 30 min. Compound identification was facilitated using the NIST Library and Wiley databases.
Preparation of chitosan solution
Low molecular weight chitosan (Sigma-Aldrich, USA) was dissolved in acetic acid to prepare 0.5% and 1% solutions. The chitosan solutions were then combined with varying amounts of postbiotics and homogenized using a magnetic stirrer for 1 h to create uniform mixtures (Table ).
Preparation of pathogenic bacteria culture
E. coli (ATCC 35150) and S. aureus (ATCC 25923) were cultured in Tryptic soy broth (TSB) at 37 °C for 24 h to prepare the bacterial inoculum for sausage contamination. The cultures were then centrifuged at 4200 rpm for 10 min, and the resulting bacterial pellets were resuspended in 0.1% (w/v) peptone water. To standardize the inoculum concentration, the pellets were resuspended and diluted to 10 mL, targeting an optical density of 0.5 at 600 nm.
The optical density was measured and compared with McFarland standards to confirm an approximate concentration of 1.5 × 10⁸ CFU/mL using a spectrophotometer (Jenway-6505 UV/Vis, UK). This calibration ensured consistent inoculum concentrations, enabling reliable contamination of the sausages.
Production and treatment of sausage samples
The sausage formulation consisted of a mixture of beef (750 g), ice (100 g), oil (30 g), sodium chloride (15 g), starch (30 g), soy protein isolate (50 g), dry milk (20 g), sodium phosphates (3 g), and nitrite (120 ppm). These ingredients are commonly used in meat processing plants in Iran. The sausages were stored at a refrigerated temperature of 4 ± 1 °C. To simulate foodborne contamination, 350 µL of a solution containing E. coli and S. aureus was spread onto the surface of the sausages. The products were then left at room temperature for 30 min to allow the bacteria to attach to the surface. The treatment solutions were prepared in two forms, test and control, as outlined in Table . These solutions were added to the heated sausage formulation before the introduction of foodborne pathogens. The treated sausages were then packaged and stored at 4 ± 1 °C. The sausages underwent various quality tests over 40 days (day of production and days 10, 20, 30, and 40), including chemical and microbial analyses as described in the following sections.
Chemical analysis
The pH of the sausage samples was measured using a digital pH meter (Metrohm, Switzerland) after calibration with pH 4 and 7 buffer solutions. For each sample, 10 g was homogenized in 50 mL of distilled water and the pH was measured at 25 °C. To determine the moisture content, 3 g of sausage sample was dried in an oven (Behdad, Iran) at 103 °C for 5 h, then cooled in a desiccator to obtain the dry weight. The moisture content was calculated by dividing the weight difference between the initial and dry samples by the initial weight.
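The moisture calculation above is simple enough to sketch directly; a minimal Python illustration (the sample weights below are hypothetical, not measurements from this study):

```python
def moisture_percent(initial_g: float, dry_g: float) -> float:
    """Moisture content (%) = 100 * (initial weight - dry weight) / initial weight."""
    if initial_g <= 0:
        raise ValueError("initial weight must be positive")
    return 100.0 * (initial_g - dry_g) / initial_g

# Hypothetical example: a 3 g sample weighing 1.2 g after oven-drying at 103 degC
print(round(moisture_percent(3.0, 1.2), 1))  # 60.0
```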
The total fat content was analyzed according to the Iranian Standard and Industrial Research Institute guidelines (2002). For the Total Volatile Basic Nitrogen (TVB-N) analysis, 10 g of sausage was mixed with 2 g of MgO and 200 mL of distilled water and distilled until 125 mL of distillate was collected. The distillate was titrated with 0.1 N hydrochloric acid solution, and the TVB-N content was reported in mg/100 g of sausage.
Microbial analysis
The mesophilic bacteria were determined by mixing 1 g of homogenized sausage sample with 9 mL of peptone water solution. Serial dilutions of the sample were then prepared and plated on Plate Count Agar (PCA). The plates were incubated at 37 °C for 48 h, and the colonies were counted on plates with 30–300 colonies. The mesophilic bacteria counts were expressed as Log CFU/mL. To count psychrotrophic microorganisms in the sausage samples, serial dilutions were prepared and plated on Plate Count Agar (PCA) using the spread plate method. The plates were then incubated at 7 °C for 7 days, and the psychrotrophic microorganism population was reported as Log CFU/mL. For the enumeration of molds and yeasts, Yeast Extract Glucose Chloramphenicol (YGC) agar was used. The plates were incubated at 25 °C for 3–5 days, and the colonies were counted. Yeast colonies were distinguished from mold colonies based on their morphology. To enumerate E. coli, serial dilutions were prepared in buffered peptone water, and then 1 mL of each dilution was spread on Violet Red Bile Glucose Agar (VRBGA) plates. The plates were incubated at 37 °C for 48 h, and only plates with fewer than 300 colonies were considered for enumeration. For S. aureus enumeration, decimal serial dilutions were prepared, and 1 mL of each sample was streaked onto Baird-Parker egg yolk tellurite agar. The plates were incubated at 37 °C for 48 h, and the resulting colonies were counted and expressed as Log CFU/mL.
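The plate counts above are reported as Log CFU/mL, i.e. the colony count scaled by the decimal dilution that was plated and then log-transformed. A minimal Python sketch of that conversion (the colony count and dilution below are hypothetical):

```python
import math

def log_cfu_per_ml(colonies: int, dilution_exponent: int, plated_ml: float = 1.0) -> float:
    """Log10 CFU/mL from a plate count.

    colonies: colonies counted on the plate (countable range here: 30-300)
    dilution_exponent: n for the 10^-n decimal dilution that was plated
    plated_ml: volume spread on the plate (1 mL in this protocol)
    """
    cfu_per_ml = colonies * (10 ** dilution_exponent) / plated_ml
    return math.log10(cfu_per_ml)

# Hypothetical example: 150 colonies on the 10^-4 dilution plate
print(round(log_cfu_per_ml(150, 4), 2))  # 6.18
```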
Statistical analysis The experiments were conducted in duplicate over the 40-day cold storage period. The results were expressed as mean values ± standard deviation. Statistical analysis was performed using SPSS software version 18. Analysis of Variance (ANOVA) and Tukey’s test used to determine significant differences between treatment groups and sampling days at 95% confidence level. L. bulgaricus To prepare the postbiotic, L. bulgaricus (Persian Type Culture Collection, Iran) was first cultured in De Man Rogosa & Sharp (MRS) broth at 37 °C for 24 h. The resulting bacterial suspension was then subjected to stirring and ultrasonication to break down the cells. The mixture was centrifuged at 4000 rpm for 15 min and filtered through a 0.45 μm filter to obtain the postbiotic solution. The final appropriate soulution of postbiotic was prepared in two concentrations of 150 and 300 mg/L, using physiological serum as the solvent , . Antimicrobial activity The antimicrobial activity of the postbiotics was evaluated using the well diffusion method . The postbiotics were applied to agar plates that had been overlaid with E. coli (ATCC 35150) at a concentration of 0.5 McFarland turbidity standard. This standard approximately corresponds to 1.5 × 10 8 CFU/mL. To fix the inoculum concentration, E. coli suspension compared with this standard in order to adjust at desired level. The plates were then incubated at 37 °C for 24 h. The resulting inhibition zones were measured in millimeters. Chloramphenicol served as the positive control, while MRS broth was used as the negative control to validate the assay results. Antioxidant activity Stock solution of the postbiotic was prepared at 1 mg/mL. A DPPH (1,1-diphenyl-2-picrylhydrazyl radical) solution was prepared by dissolving 2.5 mg of this compound in 100 mL of methanol. 
Then, 100 µL of the postbiotic sample was mixed with 100 µL of Trolox solution (6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid) and 3.9 mL of DPPH solution. After 30 min in an ultrasonic bath, the mixture was incubated in the dark for 45 min, and the absorbance was measured at 517 nm using a spectrophotometer (Jenway-6505 UV/Vis, UK). The percentage radical-scavenging activity (%SA) was calculated using Eq. (1): 1 $$\%\text{SA} = \frac{100 \times (A_{\text{control}} - A_{\text{sample}})}{A_{\text{control}}}$$ where A control is the absorbance of a solution with DPPH and methanol, and A sample is the absorbance of the DPPH solution in the presence of the postbiotic or the standard used, i.e. the Trolox solution . Determination of volatile compounds To extract the volatile compounds from the postbiotic, 30 mL of postbiotic was mixed twice with an equal volume of ethyl acetate for 15 min. The mixture of supernatant and ethyl acetate was then separated into aqueous and organic portions, and the organic portions were combined and dried using a rotary evaporator. The dried sample was then dissolved in 500 µL of methanol and left to sit overnight at room temperature. The sample was then filtered and analyzed using gas chromatography-mass spectrometry (GC-MS) (Agilent 7890 and MS Agilent 5975, USA) to identify the volatile compounds present in the postbiotic. The oven temperature was set to 40 °C and held for 5 min to allow for the initial vaporization of the sample. The injector temperature was 250 °C, and 1 µL of the sample solution was injected. Helium was used as the carrier gas.
The oven temperature was ramped up from 40 °C to 250 °C at a rate of 10 °C per minute, then maintained at 250 °C for 5 min to ensure complete elution of the volatile compounds. The total run time of the GC program was approximately 30 min. Compound identification was facilitated using the NIST Library and Wiley databases . Low molecular weight chitosan (Sigma-Aldrich, USA) was dissolved in acetic acid to prepare 0.5% and 1% solutions.
The chitosan solutions were then combined with varying amounts of postbiotics and homogenized using a magnetic stirrer for 1 h to create uniform mixtures (Table ) . E. coli (ATCC 35150) and S. aureus (ATCC 25923) were cultured in Tryptic soy broth (TSB) at 37 °C for 24 h to prepare the bacterial inoculum for sausage contamination. The cultures were then centrifuged at 4200 rpm for 10 min, and the resulting bacterial pellets were resuspended in 0.1% (w/v) peptone water. To standardize the inoculum concentration, the pellets were resuspended and diluted to 10 mL, targeting an optical density of 0.5 at 600 nm. The optical density was measured with a spectrophotometer (Jenway-6505 UV/Vis, UK) and compared with McFarland standards to confirm an approximate concentration of 1.5 × 10^8 CFU/mL. This calibration ensured consistent inoculum concentrations, enabling reliable contamination of the sausages . The sausage formulation consisted of a mixture of beef (750 g), ice (100 g), oil (30 g), sodium chloride (15 g), starch (30 g), soy protein isolate (50 g), dry milk (20 g), sodium phosphates (3 g), and nitrite (120 ppm). These ingredients are commonly used in meat processing plants in Iran. The sausages were stored at a refrigerated temperature of 4 ± 1 °C. To simulate foodborne contamination, 350 µL of a suspension containing E. coli and S. aureus was spread onto the surface of the sausages . The products were then left at room temperature for 30 min to allow the bacteria to attach to the surface. The treatment solutions were prepared in two forms, test and control, as outlined in Table . These solutions were added to the heated sausage formulation before the introduction of the foodborne pathogens. The treated sausages were then packaged and stored at 4 ± 1 °C. The sausages underwent chemical and microbial quality tests over 40 days of storage (day of production and days 10, 20, 30, and 40), as described in the following sections.
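The surface-inoculation arithmetic above can be sketched as follows. The 1.5 × 10^8 CFU/mL figure is the McFarland 0.5 estimate used in the study; the per-sausage framing and the helper name are our illustrative assumptions:

```python
import math

def surface_load_log_cfu(volume_ml, conc_cfu_per_ml):
    """log10 of the total CFU delivered by a surface inoculation."""
    return math.log10(volume_ml * conc_cfu_per_ml)

# 350 uL of a ~1.5 x 10^8 CFU/mL suspension spread on each sausage
print(round(surface_load_log_cfu(0.350, 1.5e8), 2))  # ~7.72 log CFU
```

This is the rough challenge load against which the Log CFU/mL counts in the results can be read.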
The pH of the sausage samples was measured using a digital pH meter (Metrohm, Switzerland) after calibration with pH 4 and 7 buffer solutions. For each sample, 10 g was homogenized in 50 mL of distilled water and the pH was measured at 25 °C . To determine the moisture content, 3 g of sausage sample was dried in an oven (Behdad, Iran) at 103 °C for 5 h, then cooled in a desiccator to obtain the dry weight. The moisture content was calculated by dividing the weight difference between the initial and dry samples by the initial weight . Antimicrobial activity of L. bulgaricus postbiotic In the current study, L. bulgaricus postbiotics were examined at 150 and 300 mg/L concentrations, chosen for their proven antioxidant and antibacterial effects in preliminary trials (Fig. ). The tested concentrations effectively inhibited E. coli growth, with clear inhibition zones observed at both levels. This indicates a promising starting point for evaluating the postbiotics' antimicrobial potential without the adverse effects that high concentrations could cause. Using moderate levels helps establish a dose-response relationship while still providing antibacterial protection. Focusing on 150 mg/L and 300 mg/L strikes a balance between practical application, safety, and research opportunities. Future studies can explore higher concentrations to better understand the underlying mechanisms and potential enhancements in antioxidant and antibacterial activities. Postbiotics have shown antimicrobial effects on harmful and spoilage bacteria according to Moradi et al. . The antimicrobial activity of L.
bulgaricus postbiotics at concentrations of 150 and 300 mg/L against E. coli was examined using the well diffusion method. The findings demonstrated a clear link between the concentration of postbiotics and their ability to inhibit growth. The inhibition zone was 6.61 ± 0.30 mm at a postbiotic concentration of 150 mg/L and 9.66 ± 0.19 mm at 300 mg/L. The primary postbiotic compounds from Lactobacillus species consist of ribosomally produced peptides, such as bacteriocins, and metabolic by-products with various chemical compositions, such as lactic acid, organic acids, hydrogen peroxide (H2O2), diacetyl, acetoin, and phenolic compounds , . The antimicrobial properties of these compounds offer a plausible explanation for the observed inhibition. In another investigation conducted by Rasouli and colleagues (2021), meat samples wrapped in films containing postbiotics showed lower levels of pathogens compared to samples without a film coating . The decrease in bacterial numbers shows the promise of postbiotics in preventing infections and food spoilage . In addition, Wang et al. (2019) employed bacteriocins derived from L. plantarum LPL-1 to combat Listeria monocytogenes . The findings indicated that bacteriocins can prevent the growth of Listeria monocytogenes by acidifying the cell membrane and forming pores in the bacterial membrane . Antioxidant capacity of L. bulgaricus postbiotics Different types of bioactive metabolites produced by probiotics (postbiotics) have shown antioxidant properties . The DPPH radical-scavenging method was used to measure the antioxidant activity of L. bulgaricus postbiotics, showing antioxidant percentages of 48.50% and 45.75% at concentrations of 150 and 300 mg/L, respectively. This method relies on the ability of antioxidants to neutralize the DPPH radical, causing a visible color shift from purple to yellow.
By measuring this color change, the antioxidant capacity of a sample can be quantitatively assessed, reflecting its ability to quench free radicals and donate hydrogen atoms. The antioxidant properties of postbiotics have significant implications for combating oxidative stress, a condition linked to various diseases such as cardiovascular disorders, neurodegenerative diseases, and certain cancers. These compounds effectively neutralize free radicals, which contribute to oxidative damage and disease progression. Furthermore, incorporating probiotics and their postbiotic metabolites into functional foods presents a promising approach to preventing oxidative stress-related diseases, promoting overall health through dietary intervention. Consuming probiotic-rich foods or those containing postbiotic compounds may provide protective effects against these diseases, advocating for their integration into a healthy diet . Chang et al. (2021) examined the antioxidant effects of postbiotics derived from six probiotic L. plantarum strains (RG11, RG14, RI11, RS5, TL1, and UL4). The findings showed that postbiotics are effective in decreasing protein and lipid oxidation by inhibiting radicals. The formulated media showed increased hydroxyl radical scavenging activity. RI11 exhibited the greatest reducing power activity among the tested postbiotics but showed no significant distinction from RG14 and UL4 . Moreover, another study examined the antioxidant capabilities of supernatants derived from different strains of Lactobacillus, demonstrating their effectiveness in scavenging DPPH radicals. Various factors, such as metal ions and preexisting antioxidant metabolites, were responsible for the variations in antioxidant activity among strains . Volatile compounds in L. bulgaricus postbiotics GC-MS was employed to detect volatile components in L. bulgaricus postbiotics.
GC separated the components of the mixture, while MS identified them. The retention time and peak area of the eleven identified compounds showed variability, as illustrated in Table . Various studies have identified a variety of components and volatile profiles in postbiotics , . The volatile compound profiles of postbiotics of six strains of probiotic L. plantarum isolated from Malaysian foods were examined using GC-MS. Different organic substances, such as esters, acids, and pyrrole compounds, were recognized. The functional traits of the postbiotics differed depending on the strain. RG11, RI11 and RS5 showed greater levels of inhibition and antioxidant properties due to their increased levels of acetic acid, caproic acid, and lactic acid. The highest variety of volatile compounds was observed in postbiotic RI11, which was generated in a specially designed culture medium, in comparison to the other versions. On the other hand, postbiotic UL4, produced with the MRS control culture medium, had 10 volatile compounds, whereas postbiotics RI11 and RS5 each had 9 volatile compounds. Furthermore, postbiotics RG11 and RG14 contained 8 volatile compounds, while postbiotic TL1 had 7 volatile compounds, according to Chang et al. . The pH, moisture, total fat, and TVB-N values of sausage samples during the cold storage Measuring pH in meat and its products is essential for evaluating quality characteristics. Different factors like animal traits, breeding methods, processing techniques, and storage conditions impact the quality of meat and meat derivatives. During the production and storage of meat, pH is a crucial indicator of quality . Table shows the changes in pH of the sausage samples during the 40-day refrigeration period. The pH levels gradually dropped in samples S2 to S10 as they were stored. Initially, there were noticeable variations in pH among the samples; by day 30, pH values ranged from 5.77 to 6.31.
The samples with 300 mg/L postbiotic and 0.5% chitosan (S9) had the lowest pH values, while the S1 sample with 120 ppm nitrite had the highest. By the end of storage, the pH continued to decrease, with the 300 mg/L postbiotic sample (S6) measuring 5.53 on day 40. During cold storage, sausages may experience a drop in pH caused by different factors, such as the conversion of carbohydrates into lactic acid and acetic acid by certain LAB such as Lactobacillus , Streptococcus , and Pediococcus , . The quick drop in pH caused by the added acidulants causes proteins to release water. This results in conditions unsuitable for the proliferation of harmful microorganisms . The application of different processing and storage techniques can cause changes in the physical and chemical properties of meat products. Physical changes refer to alterations in the texture and structure that impact sensory qualities such as volume, appearance, color, texture, aroma, and taste. These alterations, like decreasing surface moisture through dehydration and preserving fats, boost protein performance and enhance functional characteristics as a result of various compound interactions . Modern meat technology is founded on the muscle tissue's capacity to either retain or release water. Understanding how water is absorbed and retained in raw meat allows its functional and technological traits to be manipulated toward specific goals. The presence of water in meat and meat-based products has a noticeable effect on the sensory, structural, and mechanical characteristics of the raw materials, as well as on the quality of the products and their shelf life . Table illustrates the variations in moisture levels in the sausage samples throughout the period of cold storage. The findings showed a decrease in moisture levels in all samples over time, with noticeable differences between the moisture levels at the beginning and end of the study.
A decrease in moisture content occurs in sausage stored under cold conditions because moisture vapor moves from the sausage surfaces to the surrounding cold air, driven by the difference in water vapor pressure . Fat is a highly diverse raw ingredient and a vital quality attribute in processed meat products like sausages. It is important in creating meat emulsions alongside other ingredients, and it impacts the sensory and textural qualities, such as flavor, dryness, and tenderness, of meat-based products. Therefore, it is important to comprehend the impact of alterations in fat quality on the overall quality of sausages . Changes in fat composition can have adverse effects on the taste, appearance, moisture retention, health benefits, and safety of meat, impacting consumer choices. Glycerolipids are made up of monoglycerides, diglycerides, and triglycerides according to Tagrida et al. (2022), with triglycerides being the most important category. Glycerophospholipids are molecules consisting of fatty acids linked to a glycerol molecule, with a phosphatidyl ester located at the end carbon . The results of examining the sausage samples' fat content during cold storage, as shown in Table , revealed a total fat content range of 22.32% (S10, day of production) to 22.66% (S2, day 40). A small rise in overall fat content was noticed in all samples during the storage period. Similarly, variations in fat levels at each stage of the sausage manufacturing process have been examined. The fat percentage in raw ground meat from a prototype and two traditionally made sausages was 19.3%, 17.9%, and 18.1%, respectively. The fat level then gradually decreased during the technological process, especially following roasting and cooking . TVB-N analysis is a popular way to evaluate the quality of meat and meat-based products.
In general, the decomposition of nitrogen-containing proteins through spoilage processes such as microbial activity leads to the build-up of organic amines, referred to as TVB-N. These include unstable and harmful nitrogen compounds, like primary, secondary, and tertiary amines. Compounds like methylamines, which are biogenic amines, can change the color and taste of meat products like sausages, impacting how well they are received by consumers. The TVB-N levels in meat products typically rise with storage duration, showing patterns comparable to spoilage markers such as microbial growth and sensory alterations , . Table demonstrates the variations in TVB-N values of the sausage samples over the duration of cold storage. During the forty-day storage period, there was a significant rise in TVB-N values. On the day of production, all samples had TVB-N values of 21 mg/100 g. By the tenth day, there was a noticeable rise, with values reaching a peak of 25.86 ± 0.11 mg/100 g among samples S1 to S10. On the twentieth day, there were no notable differences ( p > 0.05) between the treatment groups, with values ranging from 27.43 ± 0.2 mg/100 g (S1) to 29.7 ± 0.72 mg/100 g (S2). On the thirtieth day, TVB-N levels in the sausage samples varied from 31.40 ± 0.17 mg/100 g (S1) to 33.26 ± 0.11 mg/100 g (S4), lower than the control group (41.23 ± 0.07 mg/100 g). Ultimately, at the end of the cold storage period, TVB-N levels rose significantly, reaching 36.35 ± 0.17 mg/100 g (S2). These results are consistent with research conducted by Hua et al. (2022) on fish fillets, which found a notable increase in TVB-N levels during cold storage . Covering fish fillets with sodium alginate, probiotics, and postbiotics led to decreased TVB-N levels, keeping them under the 25 mg/100 g limit during the entire 9-day storage period. Sun et al. (2019) examined how the TVB-N levels of Harbin dry sausages changed during storage when a mixture of Staphylococcus xylosus and L.
plantarum was used as a starter culture along with vacuum packaging. The findings indicated that the TVB-N levels in non-inoculated sausages were higher than those in inoculated sausages, possibly because the starter culture in dry sausages hinders the growth of certain spoilage microorganisms through competitive inhibition or bacteriocin production, leading to a decrease in TVB-N formation. The TVB-N values rose consistently during storage in all samples, possibly due to bacterial enzyme activity . Microbial assessment of sausage samples during cold storage, including mesophilic and psychrotrophic bacteria, mold, yeast, E. coli and S. aureus Sausages are highly perishable and require refrigeration or freezing to maintain their quality. The shelf life of sausages largely depends on the initial balance of microorganisms and how they develop during storage, which is heavily influenced by temperature. Refrigeration conditions, in particular, can stimulate the growth of certain microorganisms, especially those that thrive in cold temperatures . Figure depicts the changes in the mesophilic bacteria of the sausage samples over 40 days of refrigerated storage. The mesophilic bacteria counts increased steadily over time, ranging from 2.23 to 9.93 Log CFU/mL (control sample). Initially, all samples had similar mesophilic counts, but by day 10, the sample treated with 1% chitosan and 300 mg/L postbiotic (S10) had the lowest mesophilic count (3.85 Log CFU/mL), while the control sample had the highest (5.89 Log CFU/mL). By day 20, the sample treated with 1% chitosan and 300 mg/L postbiotic (S10) had the lowest mesophilic count (4.86 Log CFU/mL), while the control sample had the highest (6.85 Log CFU/mL). This trend continued on day 30, with S10 having the lowest mesophilic count (5.01 Log CFU/mL) and the control having the highest (8.10 Log CFU/mL).
By the end of the 40-day storage period, S10 still had the lowest mesophilic count (6.12 Log CFU/mL), while the control had the highest (9.93 Log CFU/mL). Overall, the combination of 1% chitosan and 300 mg/L postbiotic was found to be the most effective in inhibiting microbial growth, suggesting that this combination can be used to reduce the population of pathogenic bacteria in sausages. Dalvandi et al. (2020) studied the effect of vacuum packaging and edible coatings containing black pepper seeds and turmeric extract on the shelf life of chicken breast fillets during refrigerated storage. The results indicated a gradual increase in aerobic thermophilic bacteria across all samples over time. However, vacuum-sealed samples had significantly lower bacterial counts (around 0.8 Log CFU/mL) compared to air-packed control samples, which exceeded 6 Log CFU/mL by the 4th day. The edible coatings had no significant impact on microbial growth during the 12-day storage period, suggesting that vacuum packaging was the primary factor in inhibiting bacterial growth . Psychrotrophic bacteria, such as Pseudomonas , Aeromonas , Shewanella , and Flavobacterium , are major contributors to food spoilage at refrigerated temperatures. Figure shows the growth of psychrotrophic microorganisms in the sausage samples during cold storage. The results indicate a significant increase in psychrotrophic bacteria over time, with counts ranging from 2.11 Log CFU/mL (control sample, production day) to 7.53 Log CFU/mL (control sample, day 40). Notably, the treatment with 300 mg/L postbiotic and 1% chitosan (S10) reduced psychrotrophic bacterial growth by 2.27 Log CFU/mL compared to the control sample (day 40), suggesting its potential as a spoilage inhibitor. Shahrampour and Razavi (2023) investigated the effect of a lemon root gum coating with rosemary essential oil nanoemulsions on the shelf life of chicken meat.
It was found that psychrotrophic bacteria counts on chicken fillet surfaces increased significantly, particularly in the control sample, during refrigerated storage. By day 8, the control sample had reached 7 Log CFU/mL of psychrotrophic bacteria, whereas samples coated with lemon root gum maintained lower bacterial counts, remaining below the threshold even after 12 days of storage . Molds and yeasts are aerobic microorganisms that can grow on the surface of sausages, with mold populations sometimes reaching high densities of 10^5 to 10^7 CFU/mL. Yeast populations in raw sausages are typically lower, ranging from 10^3 to 10^5 CFU/mL. While molds and yeasts can contribute to the flavor and preservation of sausages, they also pose risks to food safety, spoilage, and product consistency. Therefore, sausage manufacturers must carefully manage these microorganisms to produce high-quality, safe, and appealing products . Figure demonstrates the changes in mold and yeast populations in the sausage samples during the 40-day refrigerated storage period. The study found that treating sausage samples with chitosan and postbiotics significantly reduced fungal growth. On the production day, the sample treated with 300 mg/L postbiotic (S6) had the lowest fungal population (1.01 Log CFU/mL), while the sample treated with 0.5% chitosan and 150 mg/L postbiotic (S7) had the highest (1.42 Log CFU/mL). By day 10 of storage, fungal populations ranged from 1.4 Log CFU/mL (S9) to 3.02 Log CFU/mL (control sample). On day 20, the sample treated with 1% chitosan (S4) had the lowest fungal population (2.01 Log CFU/mL), while the control sample had the highest (4.69 Log CFU/mL). By day 30, the sample treated with 1% chitosan (S4) still had the lowest mold and yeast population (2.86 Log CFU/mL), while the control sample had the highest (5.52 Log CFU/mL).
At the end of the 40-day storage period, fungal populations ranged from 3.89 Log CFU/mL (sample treated with 1% chitosan and 300 mg/L postbiotic: S10) to 6.86 Log CFU/mL (control sample). According to the Iranian national standard, sausage samples must be negative for Salmonella and E. coli . In this study, all tested sausage samples were negative for E. coli , meeting the standard. Additionally, the microbiological test results showed that the sausage samples were also negative for S. aureus . A similar study found that the postbiotic L. paracasei Postbio-P6 exhibited antimicrobial activity against various bacteria and fungi, including strong inhibition against S. aureus , Y. enteritis , and E. coli . However, common probiotics like L. plantarum , L. rhamnosus , and L. paracasei did not show inhibitory activity against certain bacteria . L. bulgaricus postbiotic In te current study, it was examined L. bulgaricus postbiotics at 150 and 300 mg/L concentrations, chosen for their proven antioxidant and antibacterial effects in preliminary trial (Fig. ). The tested concentrations effectively inhibited E. coli growth, with the highest inhibition zones observed at both levels. This indicates a promising starting point for evaluating postbiotics’ antimicrobial potential without causing adverse effects from high concentrations. Using moderate levels helps establish a dose-response relationship, balancing antibacterial protection. Focusing on 150 mg/L and 300 mg/L strikes a balance between practical application, safety, and research opportunities. Future studies can explore higher concentrations to better understand the underlying mechanisms and potential enhancements in antioxidant and antibacterial activities. Postbiotics have shown antimicrobial effects on harmful and spoilage bacteria according to Moradi et al. . The antimicrobial activity of L. bulgaricus postbiotics at concentrations of 150 and 300 mg/L against E. 
coli was examined in the study using the well diffusion method. Findings demonstrated a clear link between the concentration of postbiotics and their ability to inhibit growth. The largest inhibition zone was 6.61 ± 0.30 mm for a postbiotic concentration of 150 mg/L and 9.66 ± 0.19 mm for 300 mg/L. The primary postbiotic compounds from lactobacillus species consist of ribosomally produced peptides, like bacteriocins, and metabolic by-products with various chemical compositions, such as lactic acid, organic acids, hydrogen peroxide (H2O2), diacetyl, acetoin, and phenolic compound , . This situation suggests that these compounds exhibit antimicrobial properties, which can be considered a valid justification. In another investigation conducted by Rasouli and colleagues (2021), meat samples wrapped in films containing postbiotics showed lower levels of pathogens compared to samples without a film coating . The decrease in bacteria numbers shows the promise of postbiotics in averting infections and food decay . In addition, Wang et al., (2019) employed bacteriocins derived from L. plantarum LPL-1 to combat Listeria monocytogenes . The findings indicated that bacteriocins can prevent the growth of Listeria monocytogenes by acidifying the cell membrane and forming pores in the bacterial membrane . L. bulgaricus postbiotics Different types of bioactive metabolites produced by probiotics (postbiotics) have shown antioxidant properties . The DPPH radical-scavenging method was used to measure the antioxidant activity of L. bulgaricus postbiotics, showing antioxidant percentages of 48.50% and 45.75% at concentrations of 150 and 300 mg/L. This method relies on the ability of antioxidants to neutralize the DPPH radical, causing a visible color shift from purple to yellow. 
By measuring this color change, the antioxidant capacity of a sample can be quantitatively assessed, reflecting its ability to quench free radicals and donate hydrogen atoms. The antioxidant properties of postbiotics have significant implications in combating oxidative stress, a condition linked to various diseases such as cardiovascular disorders, neurodegenerative diseases, and certain cancers. These compounds effectively neutralize free radicals, which contribute to oxidative damage and disease progression. Furthermore, incorporating probiotics and their postbiotic metabolites into functional foods presents a promising approach to preventing oxidative stress-related diseases, promoting overall health through dietary intervention. Consuming probiotic-rich foods or those containing postbiotic compounds may provide protective effects against these diseases, advocating for their integration into a healthy diet. Chang et al. (2021) examined the antioxidant effects of postbiotics derived from six probiotic L. plantarum strains (RG11, RG14, RI11, RS5, TL1, and UL4). Findings showed that postbiotics are effective in decreasing protein and lipid oxidation by inhibiting radicals. The formulated media showed an increase in hydroxyl radical-scavenging activity. RI11 exhibited the greatest reducing-power activity among the tested postbiotics but showed no significant distinction from RG14 and UL4. Moreover, another study examined the antioxidant capabilities of supernatants derived from different Lactobacillus strains, demonstrating their effectiveness in scavenging DPPH radicals. Various factors, such as metal ions and preexisting antioxidant metabolites, were responsible for the variations in antioxidant activity among strains.
L. bulgaricus postbiotics
GC-MS was employed to detect volatile components in L. bulgaricus postbiotics. GC efficiently separated the components of the mixture, while MS identified them.
The retention time and peak area of the eleven identified compounds showed variability, as illustrated in Table . Various studies have identified a variety of components and volatile profiles in postbiotics. The volatile compound profiles of postbiotics from six strains of probiotic L. plantarum isolated from Malaysian foods were examined using GC-MS. Different organic substances, such as esters, acids, and pyrrole compounds, were identified. The functional traits of the postbiotics differed depending on the strain. RG11, RI11, and RS5 showed greater levels of inhibition and antioxidant properties due to their increased levels of acetic acid, caproic acid, and lactic acid. The highest variety of volatile compounds was observed in postbiotic RI11, which was generated in a specially designed culture medium, compared to the other variants. On the other hand, postbiotic UL4, produced with the MRS control culture medium, had 10 volatile compounds, whereas postbiotics RI11 and RS5 each had 9 volatile compounds. Furthermore, postbiotics RG11 and RG14 contained 8 volatile compounds, while postbiotic TL1 had 7 volatile compounds, according to Chang et al.
Measuring pH in meat and its products is essential for evaluating quality characteristics. Different factors, like animal traits, breeding methods, processing techniques, and storage conditions, impact the quality of meat and meat derivatives. During the production and storage of meat, pH is a crucial indicator of quality. Table shows the changes in pH of sausage samples during the 40-day refrigeration period. The pH levels of samples S2 to S10 gradually dropped during storage. At first, there were noticeable variations in pH levels; by day 30, pH values ranged from 5.77 to 6.31. The sample with 300 mg/L postbiotic and 0.5% chitosan (S9) had the lowest pH, while the S1 sample with 120 ppm nitrite had the highest.
By the end of storage, the pH continued to decrease, with the 300 mg/L postbiotic sample (S6) measuring 5.53 on day 40. During cold storage, sausages may experience a drop in pH caused by different factors, such as the conversion of carbohydrates into lactic acid and acetic acid by certain lactic acid bacteria (LAB), including Lactobacillus, Streptococcus, and Pediococcus. The quick drop in pH from the added acidulants causes proteins to release water. This phenomenon results in a condition that is not appropriate for the proliferation of harmful microorganisms. The application of different processing and storage techniques can cause changes in the physical and chemical properties of meat products. Physical changes refer to alterations in texture and structure that impact sensory qualities such as volume, appearance, color, texture, aroma, and taste. These alterations, like decreasing surface moisture through dehydration and preserving fats, boost protein performance and enhance functional characteristics as a result of various compound interactions. Modern meat technology is founded on the muscle tissue's capacity to either retain or release water. Understanding how water is absorbed and retained in raw meat allows its functional and technological traits to be manipulated toward specific goals. The presence of water in meat and meat-based products has a noticeable effect on the sensory, structural, and mechanical characteristics of the raw materials, as well as on the quality of the products and their shelf life. Table illustrates the variations in moisture levels in sausage samples throughout the period of cold storage. Findings showed a decrease in moisture levels in all samples over time, with noticeable differences between the moisture levels at the beginning and end of the study.
A decrease in moisture content occurs in sausage stored under cold conditions because moisture vapor moves from the sausage surface to the surrounding cold air, driven by a difference in water vapor pressure. Fat is a highly diverse raw ingredient and a vital quality attribute in processed meat products like sausages. It is important in creating meat emulsions alongside other ingredients, and it impacts the sensory and textural qualities, such as flavor, dryness, and tenderness, of meat-based products. Therefore, it is important to understand the impact of alterations in fat quality on the overall quality of sausages. Changes in fat composition can have adverse effects on the taste, appearance, moisture retention, health benefits, and safety of meat, impacting consumer choices. Glycerolipids are made up of monoglycerides, diglycerides, and triglycerides according to Tagrida et al. (2022), with triglycerides being the most important category. Glycerophospholipids are molecules consisting of fatty acids linked to a glycerol molecule, with a phosphatidyl ester located at the end carbon. The results of examining the sausage samples' fat content during cold storage, as shown in Table , revealed a total fat content range of 22.32% (S10, day of production) to 22.66% (S2, day 40). A small rise in overall fat content was noticed in all samples during the storage period. Similarly, another study examined variations in fat levels at each stage of the sausage manufacturing process. The fat percentage in raw ground meat from a prototype and two traditionally made sausages was 19.3%, 17.9%, and 18.1%, respectively. The fat level then gradually decreased during processing, especially following roasting and cooking. TVB-N analysis is a popular way to evaluate the quality of meat and meat-based products.
In general, the decomposition of nitrogen-containing proteins by spoilage processes such as microbial activity leads to the build-up of organic amines, referred to as TVB-N. These include unstable and harmful nitrogen compounds, such as primary, secondary, and tertiary amines. Compounds like methylamines, which are biogenic amines, can change the color and taste of meat products like sausages, affecting how well they are received by consumers. TVB-N levels in meat products typically rise with storage duration, showing patterns comparable to spoilage markers such as microbial growth and sensory alterations. Table demonstrates the variations in TVB-N values of the sausage samples over the duration of cold storage. During the forty-day storage period, there was a significant rise in TVB-N values. On the day of production, all samples had TVB-N values of 21 mg/100 g. By the tenth day, there was a noticeable rise, with values reaching a peak of 25.86 ± 0.11 mg/100 g across samples S1 to S10. On the twentieth day, there were no notable differences (p > 0.05) between the treatment groups, with values ranging from 27.43 ± 0.2 mg/100 g (S1) to 29.7 ± 0.72 mg/100 g (S2). By the thirtieth day, TVB-N levels in the sausage samples varied from 31.40 ± 0.17 mg/100 g (S1) to 33.26 ± 0.11 mg/100 g (S4), lower than the control group (41.23 ± 0.07 mg/100 g). Ultimately, at the end of the cold storage period, there was a significant rise in TVB-N levels, reaching 36.35 ± 0.17 mg/100 g (S2). These results are consistent with research conducted by Hua et al. (2022) on fish fillets, which found a notable increase in TVB-N levels during cold storage. Covering fish fillets with sodium alginate, probiotics, and postbiotics led to decreased TVB-N levels, keeping them under the 25 mg/100 g limit during the entire 9-day storage period. Sun et al. (2019) examined how the TVB-N levels of Harbin dry sausages changed during storage when a mixture of Staphylococcus xylosus and L.
plantarum was used as a starter culture along with vacuum packaging. The findings indicated that TVB-N levels in non-inoculated sausages were higher than those in inoculated sausages, possibly because the starter culture in dry sausages hindered the growth of certain spoilage microorganisms through competitive inhibition or bacteriocin production, leading to a decrease in TVB-N formation. The TVB-N values consistently rose during storage in all samples, possibly due to the activity of bacterial enzymes.
Microbial assessment including mesophilic bacteria, psychrotrophic, mould, yeast, E. coli and S. aureus of sausage samples during the cold storage
Sausages are highly perishable and require refrigeration or freezing to maintain their quality. The shelf life of sausages largely depends on the initial balance of microorganisms and how they develop during storage, which is heavily influenced by temperature. Refrigeration conditions, in particular, can stimulate the growth of certain microorganisms, especially those that thrive in cold temperatures. Figure depicts the changes in the mesophilic bacteria counts of sausage samples over 40 days of refrigerated storage. The mesophilic bacteria counts increased steadily over time, ranging from 2.23 to 9.93 Log CFU/mL (control sample). Initially, all samples had similar mesophilic bacteria counts, but by day 10, the sample treated with 1% chitosan and 300 mg/L postbiotic (S10) had the lowest count (3.85 Log CFU/mL), while the control sample had the highest (5.89 Log CFU/mL). By day 20, S10 again had the lowest mesophilic bacteria count (4.86 Log CFU/mL), while the control sample had the highest (6.85 Log CFU/mL). This trend continued on day 30, with S10 having the lowest count (5.01 Log CFU/mL) and the control the highest (8.10 Log CFU/mL).
By the end of the 40-day storage period, S10 still had the lowest mesophilic bacteria count (6.12 Log CFU/mL), while the control had the highest (9.93 Log CFU/mL). Overall, the combination of 1% chitosan and 300 mg/L postbiotic was the most effective in inhibiting microbial growth, suggesting that this combination can be used to reduce the population of pathogenic bacteria in sausages. Dalvandi et al. (2020) studied the effect of vacuum packaging and edible coatings containing black pepper seeds and turmeric extract on the shelf life of chicken breast fillets during refrigerated storage. The results indicated a gradual increase in aerobic thermophilic bacteria across all samples over time. However, vacuum-sealed samples had significantly lower bacterial counts (around 0.8 Log CFU/mL) compared to air-packed control samples, which exceeded 6 Log CFU/mL by the 4th day. The edible coatings had no significant impact on microbial growth during the 12-day storage period, suggesting that vacuum packaging was the primary factor in inhibiting bacterial growth. Psychrotrophic bacteria, such as Pseudomonas, Aeromonas, Shewanella, and Flavobacterium, are major contributors to food spoilage at refrigerated temperatures. Figure shows the growth of psychrotrophic microorganisms in sausage samples during cold storage. The results indicate a significant increase in psychrotrophic bacteria over time, with counts ranging from 2.11 Log CFU/mL (control sample, production day) to 7.53 Log CFU/mL (control sample, day 40). Notably, the treatment with 300 mg/L postbiotic and 1% chitosan (S10) reduced psychrotrophic bacterial growth by 2.27 log CFU/mL compared to the control sample (day 40), suggesting its potential as a spoilage inhibitor. Shahrampour and Razavi (2023) investigated the effect of a lemon root gum coating with rosemary essential oil nanoemulsions on the shelf life of chicken meat.
It was found that psychrotrophic bacteria counts on chicken fillet surfaces increased significantly, particularly in the control sample, during refrigerated storage. By day 8, the control sample had reached 7 Log CFU/mL of psychrotrophic bacteria, whereas samples coated with lemon root gum maintained lower bacterial counts, remaining below the threshold even after 12 days of storage. Molds and yeasts are aerobic microorganisms that can grow on the surface of sausages, with mold populations sometimes reaching high densities of 10⁵ to 10⁷ CFU/mL. Yeast populations in raw sausages are typically lower, ranging from 10³ to 10⁵ CFU/mL. While molds and yeasts can contribute to the flavor and preservation of sausages, they also pose risks to food safety, spoilage, and product consistency. Therefore, sausage manufacturers must carefully manage these microorganisms to produce high-quality, safe, and appealing products. Figure demonstrates the changes in mold and yeast populations in sausage samples during the 40-day refrigerated storage period. The study found that treating sausage samples with chitosan and postbiotics significantly reduced fungal growth. On the production day, the sample treated with 300 mg/L postbiotic (S6) had the lowest fungal population (1.01 Log CFU/mL), while the sample treated with 0.5% chitosan and 150 mg/L postbiotic (S7) had the highest (1.42 Log CFU/mL). By day 10 of storage, fungal populations ranged from 1.4 Log CFU/mL (S9) to 3.02 Log CFU/mL (control sample). On day 20, the sample treated with 1% chitosan (S4) had the lowest fungal population (2.01 Log CFU/mL), while the control sample had the highest (4.69 Log CFU/mL). By day 30, the sample treated with 1% chitosan (S4) still had the lowest mold and yeast population (2.86 Log CFU/mL), while the control sample had the highest (5.52 Log CFU/mL).
At the end of the 40-day storage period, fungal populations ranged from 3.89 Log CFU/mL (sample treated with 1% chitosan and 300 mg/L postbiotic: S10) to 6.86 Log CFU/mL (control sample). According to the Iranian national standard, sausage samples must be negative for Salmonella and E. coli. In this study, all tested sausage samples were negative for E. coli, meeting the standard. Additionally, the microbiological test results showed that the sausage samples were also negative for S. aureus. A similar study found that the postbiotic L. paracasei Postbio-P6 exhibited antimicrobial activity against various bacteria and fungi, including strong inhibition against S. aureus, Y. enteritis, and E. coli. However, common probiotics like L. plantarum, L. rhamnosus, and L. paracasei did not show inhibitory activity against certain bacteria.
There is a growing trend towards using natural antioxidants and antimicrobial agents in food products, including meat and meat-based products, due to increasing consumer demand. One area of focus is reducing the use of nitrites in sausages, as they can form compounds that have negative health effects. To address this, it is necessary to improve the quality and safety of sausages. This study investigated the effects of chitosan and L. bulgaricus postbiotic on the quality attributes of sausages during storage. Results confirmed that L. bulgaricus postbiotic has antioxidant and antimicrobial properties, effective against pathogens like E. coli and S. aureus. During cold storage, the combination of 300 mg/L postbiotic and 1% chitosan was the most effective treatment in inhibiting microbial growth.
The results suggest that postbiotics, with their antioxidant and antimicrobial capabilities, can be used as a coating to prevent biological contamination in food products, offering a promising strategy for improving food safety while reducing the amount of nitrite. The results of the current study showed that treating sausage samples with chitosan and postbiotic significantly improved their quality characteristics. This approach not only enhances the quality of perishable foods like meat products but also improves food safety by reducing the presence of pathogenic bacteria, ultimately extending the product's shelf life with reduced nitrite.
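The microbial comparisons above are all expressed on the log10 scale (e.g., S10's day-40 psychrotrophic count was 2.27 log CFU/mL below the control). Since each log unit is a tenfold change, even a modest log reduction implies a large fold-change in viable cells. A minimal sketch of that arithmetic, using the day-40 psychrotrophic counts from the text (the `log_reduction` helper is illustrative, not part of the study's methodology):

```python
def log_reduction(control_log_cfu: float, treated_log_cfu: float):
    """Return (difference in log10 CFU, equivalent fold reduction in cells)."""
    delta = control_log_cfu - treated_log_cfu
    return delta, 10.0 ** delta

# Day-40 psychrotrophic counts: control 7.53 log CFU/mL; S10 is reported
# as 2.27 log units lower, i.e. 5.26 log CFU/mL.
delta, fold = log_reduction(7.53, 5.26)
print(round(delta, 2), round(fold))  # a 2.27-log difference is roughly a 186-fold reduction
```

This is why log-scale differences that look small numerically (2–3 units) correspond to populations that differ by two to three orders of magnitude.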
Challenges in teacher-student communication during family medicine residency: A qualitative study | 24f56d3e-3798-40c1-a54d-4989e069e880 | 11407679 | Family Medicine[mh] | Medical residency is pivotal in specialists’ training; however, it tends to be asymmetrical towards care aspects, often deprioritizing the development of complementary skills such as communication . Several studies have shown that deficiencies in communication skills among family medicine residents and specialists are a systemic problem caused by inadequate training and time constraints, which negatively impact interactions in academic and service environments . Evaluating these skills comprehensively allows for the identification of areas for improvement , biases and disparities in interaction environments , and the need to implement specific cultural strategies to address these deficiencies . Therefore, the need to incorporate practical activities for teaching effective communication strategies has been identified in various environments where residency takes place . Despite the importance and recognition of developing these skills in different parts of the world, communication problems are common in Mexico, where cultural and educational frameworks have historically emphasized technical and clinical skills over communication abilities. As a result, content related to communication in various family medicine curricula in the country is not sufficiently explored or nonexistent . It has been emphasized that medical education should not only focus on creating experts in the diagnosis, and treatment of diseases, but also foster clear, empathetic, and compassionate communication with patients, and their families . Effective communication in medical practice has been linked to increased patient satisfaction, therapeutic adherence, and reduced hospital readmissions . 
Conversely, poor communication can lead to conflicts within institutions, impact on trust, damage professional relationships, and compromise the quality of care . Positive interactions between teachers and students promote motivation, engagement, retention, and student well-being, contributing to academic success . The role of communication in positive workplace relationships underscores the need to understand how communication impacts various scenarios in physicians’ professional development , and the role played by academic figures in these processes . Given these considerations, it is essential to identify communication challenges between teachers, and residents during Family Medicine residency. This insight can have significant implications for academic training and professional development. Study design and setting Multi-center qualitative study, involving Family Medicine professors and residents from the Mexican Republic. The present research was approved by the Research Ethics Committee of the Faculty of Medicine at the National Autonomous University of Mexico (UNAM); registration number: FM/DI/010/2021 and adhered to the SRQR guidelines . The study focused on exploring teacher-resident communication challenges–which included their interactions with peers, the work team, and patients–using the Flanagan’s critical incident technique (CIT) . This technique is considered valuable for understanding significant events related to professionals’ behavior in well-defined situations. Critical incidents do not necessarily involve extreme gravity or life-threatening situations; instead, they encompass occurrences that are surprising, unexpected, or disturbing to the professional, prompting a certain level of analysis . Sampling and recruitment From May 17th to October 11th, 2022, professors and third-year residents in the Family Medicine specialty from various federal entities within the Mexican Republic were invited via e-mail to participate in a Zoom session. 
Third-year residents were chosen for their tenure in the residency program, which exposed them to a wide range of incidents. The participating professors included associate and full professors with different years of teaching experience. During the sessions, the significance of the research was presented to sensitize and motivate participants (rapport). We explained aspects of the CIT, communication problems, their effects, and the importance of addressing them. The study methodology was also described. In these sessions, we emailed all attendees the informed consent form (in Word format), stating that participation was voluntary, anonymous, and would not affect their performance as residents or professors. The consent form detailed the benefits and risks of participating in the study and informed participants of their right to withdraw at any time. Additionally, participants received Word documents containing questions about sociodemographic data and open-ended questions on critical incidents regarding teacher-resident communication challenges. The elements that participants were required to address regarding the critical incidents are outlined in . At the end of the sessions, attendees were given one week to submit their critical incidents and signed consent forms via e-mail. Only those who chose to participate freely returned the required documents in separate files. Critical incident formats related to communication problems were included, while duplicate, poorly reported, or incomplete formats were excluded, as they did not meet the descriptive elements mentioned in .
Data collection and analysis
The research team comprised specialists in family medicine (I H-T, ON P-A), teaching experts (I H-T, LF R-H, G L-O), communication specialists (I H-T, ON P-A, LF R-H), and doctors in sciences and education (G L-O, LF R-H).
The obtained formats were independently analyzed according to the academic role (teacher or resident) to identify the most frequent communication problems. Once the critical incidents that met the inclusion criteria were selected, the information was analyzed based on its content and contribution to each category of analysis. According to the methodology of Hughes , the following categories were previously established: organizational communication (related to documentation, information hierarchy, and institutional procedures), assertive communication (including sensitivity, empathy, problem-solving skills, or helpfulness), and effective communication (communication barriers). From the obtained data, an emergent category arose: asymmetric communication, which includes narratives about abuse of power, shouting, rudeness, and humiliating treatment. To enhance the reliability of our findings and minimize individual biases, we employed investigator triangulation involving all authors. This approach leverages diverse perspectives to enrich the analysis and confirm findings . Initially, each researcher independently reviewed the critical incident reports to confirm categories. We then convened to discuss our findings and resolve discrepancies through collaborative discussion. Subsequent reviews refined these categories and addressed any disagreements. Several meetings were held to resolve the remaining discrepancies and achieve consensus . Saturation was reached when no new themes or subthemes were identified ; this occurred when 60% of the critical incidents were analyzed. However, the analysis was completed for all critical incidents that met the selection criteria. Multi-center qualitative study, involving Family Medicine professors and residents from the Mexican Republic. 
The present research was approved by the Research Ethics Committee of the Faculty of Medicine at the National Autonomous University of Mexico (UNAM); registration number: FM/DI/010/2021 and adhered to the SRQR guidelines . The study focused on exploring teacher-resident communication challenges–which included their interactions with peers, the work team, and patients–using the Flanagan’s critical incident technique (CIT) . This technique is considered valuable for understanding significant events related to professionals’ behavior in well-defined situations. Critical incidents do not necessarily involve extreme gravity or life-threatening situations; instead, they encompass occurrences that are surprising, unexpected, or disturbing to the professional, prompting a certain level of analysis . From May 17th to October 11th, 2022, professors and third-year residents in the Family Medicine specialty from various federal entities within the Mexican Republic were invited via e-mail to participate in a Zoom session. Third-year residents were chosen for their tenure in the residency program, which exposed them to a wide range of incidents. The participating professors included associate and full professors with different years of teaching experience. During the sessions, the significance of the research was presented to sensitize, and motivate participants (rapport) . We explained aspects of the CIT, communication problems, their effects, and the importance of addressing them. The study methodology was also described. In these sessions, we emailed all attendees the informed consent form—in Word format—stating that participation was voluntary, anonymous, and would not affect their performance as resident or professors. The consent form detailed the benefits and risks of participating in the study and informed participants of their right to withdraw at any time. 
Additionally, participants received Word documents containing questions about sociodemographic data and open-ended questions about critical incidents regarding teacher-resident communication challenges. The elements that participants were required to address regarding the critical incidents are outlined in . At the end of the sessions, attendees were given one week to submit their critical incidents and signed consent forms via e-mail. Only those who chose to participate freely returned the required documents in separate files. Critical incident formats related to communication problems were included, while duplicate, poorly reported, or incomplete formats were excluded, as they did not meet the descriptive elements mentioned in . The research team comprised specialists in family medicine (I H-T, ON P-A), teaching experts (I H-T, LF R-H, G L-O), communication specialists (I H-T, ON P-A, LF R-H), and doctors in sciences and education (G L-O, LF R-H). The obtained formats were independently analyzed according to the academic role (teacher or resident) to identify the most frequent communication problems. Once the critical incidents that met the inclusion criteria were selected, the information was analyzed based on its content and contribution to each category of analysis. According to the methodology of Hughes, the following categories were previously established: organizational communication (related to documentation, information hierarchy, and institutional procedures), assertive communication (including sensitivity, empathy, problem-solving skills, or helpfulness), and effective communication (communication barriers). From the obtained data, an emergent category arose: asymmetric communication, which includes narratives about abuse of power, shouting, rudeness, and humiliating treatment. To enhance the reliability of our findings and minimize individual biases, we employed investigator triangulation involving all authors.
This approach leverages diverse perspectives to enrich the analysis and confirm findings. Initially, each researcher independently reviewed the critical incident reports to confirm categories. We then convened to discuss our findings and resolve discrepancies through collaborative discussion. Subsequent reviews refined these categories and addressed any disagreements. Several meetings were held to resolve the remaining discrepancies and achieve consensus. Saturation was reached when no new themes or subthemes were identified; this occurred when 60% of the critical incidents had been analyzed. However, the analysis was completed for all critical incidents that met the selection criteria. Of those attending the Zoom session, 70 out of 103 professors (67.97%) and 50 out of 214 residents (23.36%) agreed to participate. A total of 224 critical incidents were collected (several participants reported more than one incident), of which 192 met the selection criteria (85.71%). Among these, 127 were reported by professors: 80 (63%) incidents were reported by women and 47 (37%) by men. Residents reported 65 critical incidents, 32 (49.23%) described by women and 33 (50.77%) by men. The average age of professors and residents was 42.44 years (±5.6) and 34.29 years (±5.01), respectively. presents data on sex, age, years of teaching experience, and the number of critical incidents reported. According to the critical incidents collected, four categories were confirmed: asymmetric communication, assertive communication, organizational communication, and effective communication. These categories include issues such as power dynamics, empathy, communication skills, poor internal communication, and conflict resolution, among others. The frequency of these incidents was recorded for both professors and residents, highlighting the specific challenges identified by each group .
Asymmetric communication

Asymmetric communication refers to situations where there is an inequality in power, information, or influence among the parties involved in message transmission. This was prevalent in many identified critical incidents, negatively impacting patient care. Participants reported instances where their medical judgment was doubted by higher-ranking healthcare personnel, leading to delays in patient care. For example: “In an ongoing medical care shift, I presented a patient to the intermediate emergency area, and they refused to accept her because I was a resident. Despite explaining and justifying why the patient needed to be in that area, they refused to listen, and insisted that the attending physician had to give that indication… This significantly delayed patient care.” (Participant 28. Male resident) Communication issues in this context even jeopardized patient safety, as reported by a resident: “During my psychiatry rotation, I noticed that the treatments given by the attending physician were not consistent with the literature, and harming patients… I found a patient with signs of acute coronary ischemia, and three patients had hypertensive crises. I told the physician… he did not give the necessary attention because, for him, I was just a resident lacking experience.” (Participant 01. Female resident) Asymmetric communication also impacted interactions among residents based on their rank. Inappropriate use of hierarchy justified abuses, creating uncertainty, fostering complicated interactions, and affecting the work environment. “Verbal expressions of superiority by residents of higher grades affected group performance, and work environment.” (Participant 30.
Female professor)

Within this category, critical incidents were identified where residents explicitly mentioned being subjected to degrading comments, belittlement, verbal abuse, shouting, mockery, rudeness, and intimidation by professors, attending physicians from other services, or residents of higher grades. “For instance, the professor blocked me from ‘WhatsApp,’ claiming, ‘It’s my phone, and I do what I want.’ However, she gives instructions through that media… The physician not only has poor communication with some of us, but she is rude, and threatens to fail me… She laughs at me… I sent her several documents, and she refused to accept them, saying ‘I do what I want’…” (Participant 44. Female resident) In describing these critical incidents, residents expressed feeling humiliated, belittled, and insecure, even expressing a “desire to leave the residency program.” (Participant 17. Male resident)

Assertive communication

Assertiveness plays a crucial role in building positive relationships in work environments.
One participant stated: “The physician reprimanded us in front of the patient and their family when we didn’t know something, creating distrust in them… I approached him and talked about the doctor-patient relationship and how trust influenced it… and how these actions could harm the institution’s image and the services we provided. The physician understood my point very well, and since that day, he has been correcting us privately.” (Participant 33. Female resident) Assertive communication in critical incidents helped avoid misunderstandings caused by assumptions. Clearly describing ideas, active listening without prejudice, and efficient message reception were facilitated, promoting better performance: “I believe that knowing how to listen to the person approaching for support and waiting for them to talk about their issues makes communication flow. It allows us to assess what affects them and provide good guidance on decisions according to the problem.” (Participant 24. Female professor) Within critical incidents narrated by professors, personal situations affecting the learning of some residents were mentioned. Active, sensitive, and empathetic listening skills helped professors seek joint solutions for the benefit of affected residents: “The residents were complaining about a colleague, saying they didn’t want to be with her because she was too slow to work. After observing her, I noticed she had vision problems… I approached her to talk about how I could help her, and she said she didn’t have money to buy glasses… I bought them for her and talked to some colleagues to form a study group to support her… After two months of having her glasses, the comments about her performance were positive.” (Participant 2. Male professor) Detecting situations emotionally impacting students and being empathetic allowed professors “to establish more direct communication to understand problems and seek personalized solutions.” (Participant 53.
Female professor)

Organizational communication

Organizational communication is crucial during medical residency, since formats, notices, and administrative processes are part of daily life in healthcare services. Critical incidents expressed problems in communication when scheduling academic activities and in aspects related to research. Regarding the latter, several residents expressed difficulties in carrying out their thesis projects due to a lack of clear communication from authorities and professors regarding the institutional formats used to register the protocol with the ethics committees, progress delivery times, and thesis registration with the relevant administrative bodies. The lack of communication and the absence of institutional guidelines led to delays in developing their research projects, impacting knowledge dissemination. Some residents expressed organizational communication problems in this regard: “I was asked to present a research poster for a conference without prior information or specifications, and with less than 24 hours’ notice; these generated multiple problems… the organization and communication were deficient… it seemed to me that they just wanted to fill spaces in that forum impromptu… it was a disaster in all aspects, and I gained no insights.” (Participant 11. Male resident) Another common issue in critical incidents was the lack of communication regarding the scheduling of rotations, leading to conflicts and affecting residents’ learning: “A student was sent to a rotation in the endocrinology service but was rejected by the attending physician, claiming that he didn’t know anything about the resident, the site, the coordinator, or anyone… The physician said that he was tired, and he didn’t want more students… The resident lost her rotation in the endocrinology service.” (Participant 44. Female professor) Lack of communication was a frequently recurring event, particularly during vacation and holiday periods.
There was no information about the absences of professors and residents, which resulted in conflicts affecting interpersonal relationships, as well as academic and service activities. “I attended the class scheduled on the calendar… when I arrived, the classroom was empty, no students… I contacted the chief resident of the group, and he informed me that they did not attend because they went to a practice at a teaching center and ‘forgot’ to pass the memo.” (Participant 16. Male professor) The lack of updates to administrative procedures, or of communication about changes in guidelines for specific processes, was a common scenario. Some residents stated that neither professors nor attending physicians properly communicated procedures to patients, and were not up to date on rules and guidelines. This affected medical care, since miscommunicating the administrative requirements for referrals and service validity affected the authorization of medicines; “this could lead to patients going home with incomplete or no doses of medications.” (Participant 21. Female resident)

Effective communication

Effective communication involves the transmission of information in a clear manner, considering the needs of the interlocutor and their context. Some critical incidents highlighted the importance of feedback as an integral part of effective communication in medical education. Participants’ narratives revealed problems related to the delivery of relevant dates and grades, particularly “injustices in grading, even when assessment criteria had been communicated beforehand, leading to dissatisfaction and distance in the [teacher-student] relationship.” (Participant 19. Female professor) In some critical incidents, the need for professors and residents to understand each other’s perspectives and context was evident. The absence of these aspects led to misunderstandings and tensions in various scenarios.
The importance of keeping records and evidence of agreements to make communication more effective was mentioned; “this helped prevent conflicts by providing a solid foundation for communication” (Participant 58. Male professor). In some narratives, communication seemed forceful and ineffective, resulting in unnecessary tensions. “Due to the absence of a teacher, residents had agreements to miss their rotations… I talked to them and informed them that they had to comply with the established schedule because, from the beginning, they accepted the residency and agreed to follow the schedules. The residents felt attacked, arguing that they were being asked for too much… Due to my communication, which was very energetic, they took it personally.” (Participant 62. Male professor) In various critical incidents, it was observed that when regular communication channels were ineffective, they escalated into more tense and confrontational situations with little chance of future success; this affected teaching work and made the interaction between teachers and students less enjoyable, negatively impacting the residency environment: “A resident did not fulfill her academic responsibilities, despite verbal recommendations about the quality of her performance… when I did not see changes, I started drafting academic warnings… The student never told me she had some disabilities (hearing, visual, and motor)… Months later, the student wrote a letter to the institutional authorities requesting a review of grades, arguing discrimination… as a professor, I never established an open communication channel where I asked for her opinion about the situation or if she had any problem preventing her from completing the specialty…” (Participant 23. Female professor) Effective communication was identified as being able to prevent further problems, even in situations that were harsh and unpleasant for the participants.
Showing previous evidence of agreements, allowing those involved to communicate with respect, and presenting solid and reasonable arguments about the course of certain events meant acknowledging mistakes and the possibility of rectifying them, regardless of whether the participants liked it or not. “On one occasion, we had not submitted our theses on time. The professor, calmly, talked to us about the agreement we had made at the beginning of the school year, where we committed ourselves to hand in our theses according to the schedule… He showed us the evidence, a document with our signatures that made it clear that we knew when we had to submit our work. At first, we were defensive, but the professor explained to us the importance of agreements… Eventually, we understood that the responsibility was ours…” (Participant 32. Female resident)

The communication problems analyzed describe incidents that occur during family medicine residency. Asymmetric communication emerged as a critical element in the context of the teacher-student relationship. This dynamic was reflected in several incidents where communication problems were based on power imbalances and hierarchies. Hierarchy dynamics are a problem because they strip communication of objectivity, diminishing the importance of the message and focusing on situations that seek to demonstrate or ensure who can give orders and who must follow them, without considering the situation and its context. This affects residents’ confidence, as well as their clinical judgment and their involvement in solving problems related to their professional training; it has been reported that, on various occasions, students may choose to remain silent because of their academic hierarchy, regardless of the relevance of their comments. The existence of hierarchies in healthcare institutions is recognized as a triggering factor for unequal communication among professionals of different levels. Disparities in power can lead to poor communication and generate a hostile work environment. Medical residency is a comprehensive formative process in which students build and consolidate their ethical and medical identity. The humiliation, belittlement, and verbal aggression mentioned by participants have a profoundly dehumanizing impact on doctors as professionals. In addition, it has been pointed out that the stress to which students are subjected negatively affects their learning and performance, contributing significantly to the emergence of mood disorders and suicidal ideation. The described critical incidents align with findings from other studies indicating that the abuse of power in communication can erode trust and collaboration in medical training environments.
Communication is essential in medical education, particularly in family medicine residency programs, as it plays a critical role in developing both clinical and interpersonal skills. Our study identified several key areas where communication issues were prevalent. Addressing these issues requires the implementation of structured communication training programs. These programs should be embedded into the medical curriculum and form part of residents’ and professors’ ongoing professional development. Moreover, it is essential for professors to engage in continuous education to improve their communication skills. Regular workshops and seminars focusing on communication strategies can significantly improve resident interactions, fostering a culture of respect and collaboration. These educational efforts should include training in conflict resolution and feedback delivery, as our study identified these as major areas of concern. It is also worth highlighting the importance of institutional support in creating clear and efficient communication channels that emphasize empathy and mutual respect. Establishing standardized procedures for internal communication can prevent misunderstandings and ensure that important information is effectively disseminated. Patient safety was significantly affected by problems of asymmetric communication. In such incidents, the role of residents was nullified or minimized due to their academic status. Several studies have shown that open and effective communication among healthcare professionals is essential for making appropriate decisions and preventing medical errors. In the context of medical education, assertiveness is relevant since it is a skill that enables the establishment of positive relationships with colleagues and patients, facilitating the transmission of relevant medical information, whether technical or not.
Moreover, it promotes the management of difficult situations, informed decision-making, patient management, and enhanced teamwork performance in medical contexts. Regarding the reported critical incidents, some narratives highlight a lack of clarity in the message, along with an absence of empathy and poor communication skills for problem resolution, which affected learning and interaction due to the lack of trust generated in the residents. Within the reported critical incidents, a variety of situations were identified where assertiveness played a crucial role, not only in solving issues related to the learning and training of medical specialists but also in highlighting the need for assertiveness to address problems in medical education. Developing empathetic, assertive, and adequately communicative family physicians is imperative for the construction of the physician-patient relationship, as this relates to the ability to respect the patient and their family, fostering prosocial behaviors. Assertiveness is a highly desirable skill in family physicians, as well as in other healthcare professionals. As identified in the critical incidents, assertive communication can prevent escalating situations that could lead to a “point of no return,” where both the learning and the well-being of teachers, students, and patients may be affected. Another element identified in our results was problems in organizational communication. These were related to academic and service aspects. The lack of communication of notices, the lack of clarity in scheduling activities during residency, and the misuse of institutional communication channels affected the development of research, teaching, and medical care activities. The loss of rotations across different services due to organizational communication problems implies incomplete training in various knowledge areas crucial to patient care. It also affects the creation of potential networks with other medical disciplines complementary to family medicine.
In this regard, it has been noted that one of the main obstacles to practicing as a competent, professional physician is the lack of skill acquisition during training. This is exacerbated in educational environments that prioritize medical care over comprehensive professional training. Examples of poor organizational communication in other medical and service contexts highlight the impact on processes, the dilution of responsibilities, the deterioration of medical care, an increase in medical complaints, a decrease in patient safety, and greater expenditure of resources. Some of these problems were shared in this research and caused friction within the residency. Therefore, addressing such communication problems is necessary to make institutional processes more efficient, which is crucial in medical education settings and in providing quality services. The transmission of clear and direct information, as well as the creation of an environment of support and acceptance, is essential for establishing an effective dialogue in which conflict resolution is vital for medical practice. When analyzing critical incidents related to effective communication, the need to transmit information clearly, objectively, and in a personalized manner becomes evident. Several situations involving this type of communication were related to evaluation, where residents expressed dissatisfaction with grading methods and with the lack of follow-up on their obligations in the residency, despite previous agreements. In the analyzed critical incidents, it was possible to identify that when communication was not effective, the deterioration in the teacher-student relationship was irrevocable. This affected not only interpersonal relationships but also the overall residency environment. On the other hand, it was identified that effective communication was backed by physical evidence of previous agreements.
This, even when there was resistance from some of the involved parties, prevented incidents from escalating into major confrontations and encouraged the assumption of responsibilities. In this regard, it is essential to note that communication is a skill valued by patients when evaluating the medical care they receive. In Mexico, it has been estimated that 3 out of 4 complaints against physicians and health institutions are associated with ineffective communication. This challenge is not unique to this country, which has led to proposals for developing structured training in communication skills that can facilitate patient-centered care. Additionally, it has been identified that educational strategies in communication during medical residency not only improve residents' interactions with patients but also enhance the effectiveness with which they understand each other, illustrating the direct benefits of effective communication in mutual understanding. These observations highlight the need for reference frameworks for teaching effective communication skills, which are important for mitigating the adverse effects of poor communication during medical residency. In parallel, effective leadership qualities have been noted to promote open communication spaces and a positive team dynamic that provides solutions, improves the ability to adapt to change, fosters mutual respect, and cultivates a more favorable working environment for healthcare team members. Communication is important in medical residencies and translates into patient-centered care and improved medical outcomes. To address the communication problems identified in this study, an approach centered on open communication and the creation of a culture of psychological safety is recommended. This would create an environment where professionals feel comfortable expressing their concerns and opinions without repercussions.
It has been emphasized that effective communication skills should be included in the curricula of medical specialties, as communication efficiency and effectiveness can be improved through training. However, compassionate and empathetic bidirectional human communication between physician and patient is learned primarily from the actions of faculty. Higher-ranking physicians must be careful not to inhibit communication channels in ways that undermine the alterity of their peers, as such attitudes can quickly close those channels, eliminating any opportunity for learning. If communication skills are not promoted in curricula, many negative communication behaviors learned or emulated in medical training environments may be perpetuated, especially in medical residencies. Due to the nature of critical incidents, where emphasis is placed on significant events that occurred in the past, there may have been recall biases among participants, as well as a lack of broader and more detailed description of incidents; this limitation could be addressed through other qualitative approaches. Likewise, because the collectors of critical incidents were family medicine professors who have continuous contact with all participants, residents may have omitted information due to fears or inhibitions; this could explain their low participation (23.36%), potentially introducing bias related to who administered the instrument. Additionally, the limited time available due to the educational and service demands of the residency, as well as the possibility that participants had other priorities, could have influenced their willingness to participate in this research; the same may have occurred with professors. Communication problems affected various family medicine residency scenarios, including teaching-learning processes, work environments, coexistence, and medical care.
If this scenario is not addressed, communication problems will continue to generate obstacles in the professional training of residents, which can negatively affect medical care and interactions with other healthcare professionals. Research on the impact of communication during medical residencies is fundamental for restructuring and refining curricula. It allows residents' performance to be evaluated not only in technical and diagnostic abilities but also in the essential skills for their daily practice. Such evaluation is key to identifying and correcting asymmetries during medical training. Medical education must incorporate content that addresses communication processes and the development of soft skills. These aspects can significantly improve specialists' empathy and social abilities, allowing them to interact more effectively with colleagues, patients, and their families. Furthermore, it is important that the strengthening of communication skills continue beyond initial training. Continuing education must include updating and improvement programs in these areas, ensuring that physicians maintain and enhance their skills throughout their professional practice.
Clinicopathological features and prognostic significance of TAF1L in gastric cancer

Gastric cancer (GC) is a common malignancy globally and a serious threat to human health. With the continuous improvement of diagnostic and treatment techniques, the comprehensive treatment of gastric cancer has made some progress, but the overall efficacy is still less than satisfactory, and new therapeutic targets need to be further explored. TAF1 (TATA-box binding protein-associated factor 1), also called TAF(II)250, is located on the X chromosome (Xq13.1) and encodes TATA-box binding protein-associated factor 1, a scaffold protein for TFIID that is involved in the transcription of numerous genes in eukaryotic cells. As a homologue of TAF1, TAF1L (TATA-box binding protein-associated factor 1 like) has been found to share similar functions with TAF1, including histone acetyltransferase activity, although it also differs in certain respects. Previous studies have shown that TAF1/TAF1L plays important roles in the progression of different kinds of tumors. Several studies have also mentioned a potential role of TAF1/TAF1L in GC. To date, however, no study has explored the relationship of TAF1L expression with the clinicopathological features and prognosis of GC. In this study, we evaluated the expression of TAF1L in clinical samples by immunohistochemistry, aiming to investigate the clinical significance of TAF1L in GC and to explore a new potential biomarker for evaluating treatment and prognosis.

Patient inclusion

In this study, we screened patients with inclusion and exclusion criteria. After screening, 120 patients met the inclusion criteria, and 30 paired para-cancerous/normal tissue samples were included.
Inclusion criteria: (1) histologically confirmed gastric or oesophagogastric junction adenocarcinoma; (2) surgical treatment as the initial treatment; (3) age 18–80 years; (4) Eastern Cooperative Oncology Group performance status (ECOG) 0–1; and (5) complete clinicopathologic data. Exclusion criteria: (1) distant metastasis; (2) previous anti-tumor therapy such as chemotherapy or radiotherapy; (3) other concurrent malignancies; and (4) missing data.

Immunohistochemistry

Immunohistochemistry (IHC) was carried out according to the manufacturer's instructions (Leica Bond III, Germany). Four-micrometer-thick tissue sections were incubated with the primary rabbit anti-TAF1L antibody (1:250; 55170-1-AP, Proteintech, USA) for 15 min, followed by incubation with the secondary antibody for 8 min and DAB color development for 10 min. All IHC results were interpreted by two independent pathologists blinded to this study. In case of disagreement between the two pathologists, the result was re-evaluated by a third. Staining intensity was scored as 0 (none), 1 (weak), 2 (moderate), 3 (intense), or 4 (strongly intense). The percentage of positive cells was scored as 0 (0%), 1 (1–25%), 2 (26–50%), 3 (51–75%), or 4 (76–100%). The final staining score was calculated by multiplying the intensity score by the positive-cell percentage score, following the method of Remmele and Stegner (1987). A score < 4 was considered "low expression" and a score ≥ 4 "high expression", and these categories were used for statistical analysis. To ascertain mismatch repair (MMR) status, postoperative immunohistochemical analysis was conducted for four key MMR proteins: MLH1 (Dako clone ES05), MSH2 (Dako clone FE11), MSH6 (Dako clone EP49), and PMS2 (Dako clone EP51). Loss of any of the four MMR proteins was defined as MMR deficiency (dMMR).
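The scoring and classification rules above are simple arithmetic and logic. As a minimal sketch (the function and variable names are illustrative, not from the study), they can be expressed as:

```python
def irs_score(intensity, pct_category):
    """Final IHC staining score: intensity (0-4 in this study)
    multiplied by the positive-cell percentage category (0-4)."""
    return intensity * pct_category

def taf1l_group(score):
    """Study cutoff: score < 4 is 'low expression', >= 4 is 'high expression'."""
    return "high" if score >= 4 else "low"

def mmr_status(mlh1, msh2, msh6, pms2):
    """dMMR if any of the four MMR proteins is lost (True = retained)."""
    return "pMMR" if all((mlh1, msh2, msh6, pms2)) else "dMMR"
```

For example, moderate intensity (2) in 26–50% of cells (category 2) gives 2 × 2 = 4, i.e. high expression, whereas weak intensity (1) in 51–75% of cells (category 3) gives 3, i.e. low expression.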
Follow-up

For enrolled patients, follow-up was performed every 3 months in the first 2 years, every 6 months in years 3 to 5, and once yearly thereafter. Follow-up was conducted mainly by telephone and regular outpatient re-examinations. Overall survival (OS) was defined as the time from the date of pathological diagnosis of GC to the date of death or the most recent follow-up. The cutoff date for OS was December 31, 2023.

Bioinformatics analysis

The TCGA dataset ( https://tcgadata.nci.nih.gov/tcga ) was used to obtain the RNA-sequencing expression (level 3) profiles and clinicopathological information for stomach adenocarcinoma (STAD) cases. Differences in survival between groups were compared by the log-rank test. The predictive accuracy of TAF1L mRNA was assessed by timeROC (v 0.4) analysis. The R packages ggstatsplot and pheatmap were used to display the two-gene correlation map and the multi-gene correlations, respectively. Spearman's correlation analysis was used to describe correlations between quantitative variables without a normal distribution. The Genomics Analysis and Visualization Platform ( http://r2.amc.nl ) was used to perform KEGG (Kyoto Encyclopedia of Genes and Genomes) and GO (Gene Ontology) analyses.

Statistical analysis

SPSS 26.0 (IBM Corporation, Armonk, NY, USA) and R 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria) were used for statistical analyses, and p < 0.05 indicated statistical significance. Student's t-test or the Chi-squared test was used to assess between-group differences for continuous or discrete variables, respectively. Survival analysis was performed by the Kaplan-Meier method with the log-rank test. Univariate and multivariate Cox proportional hazards regression models were used to identify prognostic risk factors, with hazard ratios (HR) and 95% confidence intervals (CI) estimated.
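The Kaplan-Meier method used in the survival analysis is the product-limit estimator. The study computed it in SPSS/R; purely as an illustration of the underlying calculation, a minimal pure-Python sketch looks like this:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns a list of (t, S(t)) pairs at each observed death time."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    seen = set()
    for t, _ in data:
        if t in seen:
            continue  # handle tied times once
        seen.add(t)
        deaths = sum(e for tt, e in data if tt == t)        # d_i at time t
        at_risk = sum(1 for tt, _ in data if tt >= t)       # n_i still at risk
        if deaths:
            surv *= 1.0 - deaths / at_risk  # S(t) = prod over death times of (1 - d_i/n_i)
            curve.append((t, surv))
    return curve

# toy example: 4 patients, one censored at month 2
km = kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1])
```

Censored observations contribute to the risk set up to their censoring time but never trigger a drop in the curve, which is what distinguishes this estimator from a naive survival fraction.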
Baseline characteristics

In this study, a total of 1053 GC patients treated at Zhejiang Cancer Hospital (Hangzhou, China) between January 1st, 2018 and December 31st, 2019 were screened; 479 patients had distant metastasis, 261 had received preoperative chemotherapy, 96 had other concurrent primary tumors, 47 were either under 18 or over 80 years old, 29 had incomplete clinicopathologic data, and 21 were lost to follow-up. Finally, 120 patients met the inclusion criteria. Of these, 83 (69.2%) were male and 37 (30.8%) were female, and the median age was 59 years (range 26–80). All patients received surgical treatment as the initial treatment, and postoperative pathology showed that 72 cases (60.0%) were poorly differentiated or undifferentiated adenocarcinoma. The numbers of stage I, II, and III cases were 15 (12.5%), 33 (27.5%), and 72 (60.0%), respectively. The details of patient characteristics are shown in Table .

Clinicopathological features and immunohistochemical expression of TAF1L

TAF1L expression was evaluated by IHC in surgical specimens, and the results showed that TAF1L expression was higher in tumor tissues than in para-cancerous/normal tissues (Fig. a, expression of TAF1L in tumor tissue; Fig. b, expression of TAF1L in para-cancerous/normal tissue; Fig. c, representative images of TAF1L IHC staining by score).
According to the IHC results, 55 GC patients (45.8%) were in the high-expression group (TAF1L-H group) and 65 (54.2%) in the low-expression group (TAF1L-L group). TAF1L expression was mainly correlated with tumor differentiation ( p = 0.046), signet-ring cells ( p = 0.043), dMMR status ( p = 0.011), lympho-vascular invasion ( p = 0.038), and neural invasion ( p = 0.005), as shown in Table .

Survival analysis

There was a significant difference in OS between the TAF1L-H group and the TAF1L-L group (mean OS: 40.3 months vs. 51.8 months, p = 0.019; Fig. ). Furthermore, we analyzed survival differences in subgroups according to pathological features. The TAF1L-H cases presented worse survival in HER2-positive GC (mean OS: 20.9 months vs. 51.2 months, p = 0.007, Fig. a), while there was no statistical difference in HER2-negative GC (mean OS: 43.6 months vs. 51.1 months, p = 0.168, Fig. b). As for mismatch repair (MMR) status, the survival of the TAF1L-H group was significantly worse than that of the TAF1L-L group in mismatch repair-proficient (pMMR) cases (mean OS: 38.8 months vs. 51.6 months, p = 0.006, Fig. c) but not in mismatch repair-deficient (dMMR) cases ( p = 0.724, Fig. d). In addition, TAF1L-H cases showed worse OS than TAF1L-L cases in both stage I/II and stage III disease, although the differences were not statistically significant ( p = 0.075 and 0.119, respectively).

Prognostic factors analysis

Univariate Cox regression showed that TAF1L expression, tumor size, N stage, and HER2 status were statistically significant prognostic risk factors. Multivariate analysis showed that TAF1L expression (HR = 2.044, 95%CI = 1.007–4.147, p = 0.048) and HER2 status (HR = 2.383, 95%CI = 1.087–5.222, p = 0.030) were independent prognostic risk factors (Table ). Moreover, in HER2-positive cases ( n = 18), TAF1L was an independent prognostic risk factor (HR = 6.736, 95%CI = 1.373–33.032, p = 0.019).
In HER2-negative cases ( n = 102), however, TAF1L showed no significant relationship with prognosis (HR = 1.718, 95%CI = 0.789–3.741, p = 0.173). In addition, HER2 status was an independent prognostic risk factor in the TAF1L-H group (HR = 4.832, 95%CI = 1.908–12.239, p = 0.001) but not in the TAF1L-L group (HR = 1.023, 95%CI = 0.227–4.616, p = 0.977). Due to the limited number of cases, we were unable to analyze the role of TAF1L in the prognosis of dMMR cases. In pMMR cases, TAF1L was also an independent prognostic risk factor (HR = 2.291, 95%CI = 1.126–4.660, p = 0.022).

Bioinformatics analysis of TAF1L

As the TCGA-GC dataset showed, the expression of TAF1L was significantly higher in tumors than in normal control tissues ( p < 0.001, Fig. a). Additionally, we analyzed the relationship between TAF1L and some classical treatment-associated biomarkers. TAF1L presented lower expression in microsatellite instability-high (MSI-H) GC than in microsatellite-stable (MSS) GC ( p = 0.002). Moreover, we analyzed the expression relationship between TAF1L and the four main DNA mismatch repair (MMR) protein genes (MLH1, MSH2, MSH6, PMS2), and the results showed significant positive correlations ( p < 0.01). As for HER2 status, gene correlation analysis suggested a significant correlation between ERBB2 (HER2) and TAF1L ( p = 0.002). Compared with HER2-negative cases, HER2-positive cases presented higher TAF1L expression ( p = 0.005). Survival analysis found that cases with high TAF1L expression showed significantly worse OS ( p < 0.001, Fig. b). Further analysis showed that high TAF1L expression tended to be associated with worse survival in both the MSI-H group (34.0 months vs. NE, p = 0.054, Fig. a) and the MSS group (11.0 months vs. 35.0 months, p = 0.0046, Fig. b). In HER2-positive cases, TAF1L expression was negatively correlated with OS (24.0 months vs. 57.0 months, p = 0.0039, Fig.
c), while there was no significant difference in HER2-negative ones (30.0 months vs. 39.0 months, p = 0.16, Fig. d). KEGG analysis revealed that the genes highly correlated with TAF1L were mainly enriched in pathways such as the p53 signaling pathway, mismatch repair, the IL-17 signaling pathway, and the cell cycle (Fig. a). GO analysis showed that the biological functions of TAF1L and its related genes were mainly concentrated in processes such as organelle fission, cell cycle checkpoints, nuclear division, and DNA replication (Fig. b). These biological behaviors may participate in GC occurrence and progression.
GC is characterized by high heterogeneity, and a comprehensive molecular profile can greatly facilitate the evolution of treatment regimens and the precise selection of patient populations. Research on GC biomarkers is of great significance for improving treatment efficacy and prognosis. TAF1L was first found by P. Jeremy et al. to be highly expressed in human testicular germ cells, with greater than 94% similarity to TAF1; it mainly plays a role in transcriptional regulation and has histone acetyltransferase activity. Currently, only a few studies have revealed a potential role of TAF1/TAF1L in the occurrence and development of GC. However, its relationship with prognosis and clinicopathological features in GC remains unclear. Thus, this study aimed to evaluate the potential of TAF1L as a new biomarker for treatment and prognosis evaluation in GC. In this study, we performed IHC to evaluate TAF1L expression in GC tissues. According to the IHC staining results, TAF1L expression in GC tissue sections was much stronger than in normal/para-cancerous ones. Moreover, TAF1L expression was correlated with tumor differentiation, signet-ring cells, dMMR status, lympho-vascular invasion, and neural invasion in this cohort, suggesting that TAF1L is closely related to the occurrence and development of GC.
Survival analysis showed that high expression of TAF1L was associated with worse survival, and the TCGA dataset showed a similar tendency. In addition, multivariate analysis suggested that TAF1L expression is an independent prognostic risk factor. These results indicate that TAF1L expression is related to the progression of GC and may act as a potential prognostic biomarker. A previous study showed that the TAF1 and TAF1L genes carry mononucleotide repeats in their coding sequences that may be mutation targets in cancers with MSI, which may promote the tumorigenesis of MSI-H GC. MMR is critical for genome stability, and dMMR can result in an MSI phenotype; the concordance between MSI-H status and dMMR is 97.6–99%. In our cohort, the TAF1L-H group presented a higher dMMR proportion (14.5%, 8/55) than the TAF1L-L group (1.5%, 1/65), a statistically significant difference ( p = 0.011). As the prevalence of dMMR/MSI-H is 6-10% in eastern cohorts, our results suggest that higher TAF1L expression may be associated with a higher frequency of dMMR status. However, frameshift mutations would result in premature stops of amino acid synthesis in the TAF1L protein, indicating that low expression of TAF1L is associated with MSI-H/dMMR status, which is contrary to our results. Thus, we further explored the relationship between TAF1L and MSI-H/dMMR status. According to the TCGA dataset, TAF1L presented lower expression in MSI-H cases ( p = 0.00413). Moreover, TAF1L presented significant positive correlations with MLH1, MSH2, MSH6, and PMS2 expression, suggesting that low TAF1L expression might cause deficient MMR protein expression, resulting in dMMR status. Although our conclusion was not consistent with the TCGA results, owing to the rarity of dMMR GC and the limited number of cases in our cohort, TAF1L expression did correlate with MSI-H/dMMR status, and we expect this to be confirmed by studies with larger sample sizes in the future.
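The 8/55 vs. 1/65 dMMR split above involves very small expected counts, a setting where an exact test is the standard companion to the chi-squared test named in the methods. As an illustrative check (a sketch, not the authors' actual computation), a pure-Python two-sided Fisher's exact test on this 2×2 table gives p ≈ 0.011, consistent with the reported value:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (with the same
    margins) that are no more likely than the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(k):  # P(top-left cell = k) under fixed margins
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# dMMR vs pMMR counts by TAF1L group: 8/55 in TAF1L-H, 1/65 in TAF1L-L
p = fisher_exact_2x2(8, 47, 1, 64)
```

With the cohort's margins (9 dMMR among 120 patients, expected about 4.1 in the TAF1L-H group), the observed 8-vs-1 split lies well into the tail of the hypergeometric distribution.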
Through whole-genome sequencing, Zhou et al. found that TAF1 was an important mutated driver gene in HER2-positive GC. Moreover, Cai et al. found that cases with co-expression of TAF1 and HER2 presented worse prognosis in endometrial clear cell carcinoma. These results indicate a potential relationship between TAF1 and HER2 in cancer progression. As TAF1L is an analogue of TAF1, we further analyzed the correlation between TAF1L and HER2 in GC. Survival analysis based on the TCGA dataset showed that TAF1L expression was negatively correlated with OS in HER2-positive GC samples ( p = 0.0039) but not in HER2-negative ones ( p = 0.16). Moreover, the gene correlation analysis also suggested a significant association between ERBB2 and TAF1L ( p = 0.002). In our cohort, TAF1L-H cases showed worse survival in HER2-positive GC ( p = 0.007), and TAF1L was an independent prognostic risk factor in HER2-positive cases. However, there was no relationship between TAF1L expression and HER2 status ( p = 0.553) in this cohort, which may be due to the limited number of cases. In conclusion, TAF1L might play an important role in HER2-positive GC progression, and co-expression of TAF1L and HER2 in GC may result in worse prognosis. Further studies are expected to explore the relationship and mechanisms of interaction between TAF1L and HER2 status in GC. As previously reported, the efficacy of immune-checkpoint inhibitors (ICIs) in dMMR/MSI-H GC has been well validated in several phase II and III studies. According to our KEGG analysis results, the IL-17 signaling pathway was one of the top 20 enriched pathways related to TAF1L. Several studies have indicated the importance of IL-17 in cancer immunotherapy through its involvement in the tumor microenvironment. Targeting the IL-17/TAF1L immune axis may become a new way of improving immunotherapy efficacy. Moreover, targeted therapies such as trastuzumab have been widely used in HER2-positive GC.
Our results revealed relationships between TAF1L and both MSI-H/dMMR and HER2 status, indicating that TAF1L may be associated with the efficacy of immunotherapy and targeted therapy. However, these are only inferences based on our results; further analysis was not conducted because of the limited number of cases. The correlation between TAF1L and the efficacy of different treatment regimens needs to be studied further. Our study also had some limitations. First, it was a retrospective single-center study with a relatively small overall sample size, and large-sample studies may be necessary to confirm our results. Second, although we performed preliminary IHC to evaluate TAF1L expression in clinical samples and validated the findings with the TCGA dataset, some results still need to be validated by wet-lab experiments, which we intend to conduct in future studies. In addition, because most surgical specimens from stage IV patients were obtained after treatment (such as chemotherapy or immunotherapy), we did not include such patients in the analysis, and we expect further studies to address the role of TAF1L in these patients. Despite these limitations, to our knowledge this is the first study focusing on TAF1L expression and revealing its potential relationship with the clinicopathological features of GC, which is important for the future exploration of new biomarkers for therapy and prognosis. In conclusion, TAF1L is highly expressed in GC tissues and is closely related to the occurrence and development of GC. Moreover, high expression of TAF1L is a marker of poor prognosis, especially in HER2-positive GC. We propose that TAF1L might become a significant biomarker for predicting prognosis as well as a potential therapeutic biomarker in GC. However, further in vitro and in vivo experiments are needed to explore the mechanism by which TAF1L acts on GC tumor progression.
Overall, our findings provide a basis for understanding the function of TAF1L in GC and offer a theoretical foundation and new ideas for gene-based detection, diagnosis, and treatment in the future.
Pharmacogenomic analysis in adrenocortical carcinoma reveals genetic features associated with mitotane sensitivity and potential therapeutics
Adrenocortical carcinoma (ACC) is a rare but fatally aggressive endocrine malignancy with a high risk of recurrence and a dismal prognosis. However, therapeutic options for advanced ACC are limited. Mitotane, a derivative of the insecticide dichlorodiphenyltrichloroethane (DDT) with adrenolytic properties, is currently the only drug approved by the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA) for ACC. Mitotane alone or in combination with platinum-based chemotherapy is recommended as first-line therapy in the palliative setting for advanced and unresectable tumors as well as in adjuvant settings in patients at high risk of recurrence. Despite more than five decades of clinical use, treatment with mitotane remains challenging. Firstly, the dose-limiting toxicity and narrow therapeutic window of mitotane make it a difficult drug to manage and require a personalized dosing regimen, partially due to its exceedingly poor aqueous solubility and low bioavailability. Secondly, the action of mitotane is not immediate but latent, with time needed to attain target plasma concentrations, during which disease progression may occur. Moreover, the response spectrum differs between patients, with response rates between 10% and 35%. Additionally, since mitotane is a strong inducer of CYP3A4 with a long-lasting effect, drug interactions with mitotane pose another issue. Lastly, adverse effects including gastrointestinal, central nervous system, endocrine and hepatic toxicity limit its tolerability and can even lead to discontinuation of treatment.
Therefore, identifying markers that predict response to mitotane is of remarkable importance to spare patients unnecessary drug toxicity, preserve the time window for other treatments, and reduce costs. Efforts to determine predictive markers of mitotane response have long been made. To date, mitotane plasma levels within the target range of 14 to 20 mg/L are considered the strongest predictor of mitotane effectiveness. A plasma mitotane level above 14 mg/L was significantly associated with improved tumor response and survival. As for molecular predictors, the germline CYP2W1 *6 single nucleotide polymorphism was associated with a reduced probability of reaching the target concentration and with lower response rates, whereas CYP2B6 *6 correlated with higher mitotane levels. Other potential predictive factors include those implicated in mitotane action and its potential target (e.g., SOAT1). Theoretically, treatment response is also dictated by the intrinsic molecular state of tumors before drug exposure. The first study assessing the direct effects of mitotane in a large series of primary human ACC cultures found that the efficacy of mitotane was highly variable and that RRM1 , SOAT1 and CYP2W1 expression levels were not predictive of mitotane sensitivity in vitro. Hence, identifying molecular features that indicate mitotane response is urgent. In combination with mitotane, cytotoxic chemotherapy including etoposide, doxorubicin and cisplatin (EDP-M) is recommended in first-line settings. The EDP-M regimen prolonged progression-free survival to five months but failed to improve overall survival. Nevertheless, adverse events from chemotherapy are common and diverse. Thus, seeking novel therapeutic strategies is urgently needed. In this study, we conducted in vitro mitotane sensitivity testing to evaluate direct antitumor activity in patient-derived ACC cells (PDCs) obtained from 17 patients, in an attempt to distinguish the therapeutic response to mitotane through a rapid in vitro assay.
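As orientation for the concentrations used throughout this paper, the 14–20 mg/L plasma window mentioned above can be converted to micromolar units from mitotane's molar mass (C14H10Cl4, ≈320 g/mol; this value is supplied here, not taken from the paper). The 50 µM concentration used in vitro is a rounded approximation of the 14 mg/L lower bound. A minimal sketch:

```python
# Mitotane (o,p'-DDD, C14H10Cl4) molar mass in g/mol (assumed here).
MITOTANE_MW = 320.04

def mg_per_l_to_micromolar(conc_mg_per_l: float) -> float:
    """Convert a plasma concentration from mg/L to µmol/L (µM)."""
    return conc_mg_per_l / MITOTANE_MW * 1000.0

lower = mg_per_l_to_micromolar(14.0)  # ~43.7 µM, often rounded to ~50 µM
upper = mg_per_l_to_micromolar(20.0)  # ~62.5 µM
```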
Further, we performed genomic and transcriptomic studies to dissect the molecular profiles of mitotane responders and non-responders, aiming to identify molecular biomarkers associated with individual response to mitotane. Additionally, high-throughput screening (HTS) against 40 compounds was conducted to explore other potential agents. Patients and sample collection Fresh primary ACC tissues were obtained from patients upon resection or biopsy at Ruijin Hospital between September 2020 and July 2023. The ACC diagnosis was confirmed by experienced pathologists, and steroidogenic factor 1 (SF1) immunostaining was performed to confirm the adrenal cortex origin. Clinicopathological information, including age, sex, ENSAT stage, Ki67 index, hormonal secretion status, and systemic therapies received prior to surgery or biopsy, was recorded and analyzed. Hormonal secretion status was evaluated using biochemical testing of serum steroid hormone levels (e.g., cortisol, aldosterone and androgens) and the 1 mg dexamethasone suppression test. Informed consent was obtained from all patients, and this study was approved by the local ethics committee of Ruijin Hospital (Approval number: KY320). Upon surgical or biopsy removal, pieces of tumors were fixed in formalin and paraffin-embedded for pathological diagnosis. For primary cell cultures, tumor tissues were placed in Tissue Storage Solution (Miltenyi Biotec, Cat No.130-100-008). Additional tissues were immediately snap-frozen in liquid nitrogen for later use. The overview of the tissue processing pipeline is summarized in . Dissociation and short-term culture of PDCs Immediately after surgery or biopsy, tumor tissues were collected in Tissue Storage Solution (Miltenyi Biotec, Cat No.130-100-008), transported to the laboratory on ice, and dissociated within 24 hours. Tumor tissues were rinsed with Hanks' Balanced Salt Solution (HBSS; Gibco, Cat No.14175095), minced and digested with 2.0 mg/mL collagenase II (Gibco, Cat No.
17101015) and 0.02 mg/mL DNase (Roche, Cat No. 11284932001) at 37°C on a shaker for up to 2 hours. The suspension was then filtered through a 70-µm cell strainer (Falcon, Cat No.352350). After depletion of red blood cells using Red Blood Cell Lysis Buffer (Invitrogen, Cat No.00-4333-57), trypan blue staining (Gibco, Cat No.15250061) was performed for cell counting and viability assessment. After cell preparation, one portion was plated directly into 96-well plates (Corning, Cat No.3799) for mitotane sensitivity testing, whilst a small number of cells was plated in chamber slides (Millipore, Cat No. PEZGS0816) or CellCarrier Ultra plates (PerkinElmer, Cat No.6055300) for immunofluorescence staining of the adrenal cortex marker SF1 (Proteintech, Cat No.18658-1-AP). When cell numbers permitted, cells were also plated in 384-well plates for HTS and cryopreserved in liquid nitrogen for later use. Cells were then cultured in DMEM/F-12 (Gibco, Cat No. 11320033) medium supplemented with 10% fetal bovine serum (FBS; Gibco, Cat No. 10099141), 1% Penicillin-Streptomycin (10,000 U/mL; Gibco, Cat No. 15140-122) and 1% L-glutamine (200 mM; Gibco, Cat No. 25030-081). In vitro mitotane sensitivity testing Mitotane (MedChemExpress, Cat No. HY-13690) was dissolved in dimethyl sulfoxide (DMSO, Sigma, Cat No. D2650) to a 100 mM stock solution, aliquoted and stored at -80°C. For in vitro experiments, the final concentration of DMSO was ≤0.1%. Primary cells were plated in 96-well plates at a density of 1.0x10^4 cells/well in triplicate and treated with mitotane (1.0 μM-100 μM) for 72 hours, and cell viability was assessed by the Cell Counting Kit-8 (CCK-8) assay (Dojindo, Cat No. CK04). The mitotane-sensitive ACC cell line H295R (ATCC ® CRL2128™) was used as the positive control. Dose-response curves, inhibition rates, half maximal inhibitory concentration (IC50) values and the area under the dose-response curve (AUC) were calculated in Prism 8.3 software (GraphPad).
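The authors computed IC50 and AUC in GraphPad Prism; their calculations are not given as code. Purely as an illustrative sketch (not the authors' method), the same metrics can be approximated from a measured dose-response table by log-linear interpolation for IC50 and the trapezoidal rule for AUC:

```python
import math

def ic50_interpolated(doses, viability):
    """Estimate IC50 (the dose at 50% viability) by interpolating on
    log10(dose) between the two doses that bracket 50% viability.
    doses: ascending concentrations; viability: fractions of control."""
    pairs = list(zip(doses, viability))
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        if v0 >= 0.5 >= v1:
            frac = (v0 - 0.5) / (v0 - v1)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    return None  # 50% inhibition not reached within the tested range

def auc_trapezoid(doses, viability):
    """Area under the dose-response curve by the trapezoidal rule;
    a smaller AUC indicates a more sensitive culture."""
    pairs = list(zip(doses, viability))
    return sum((v0 + v1) / 2.0 * (d1 - d0)
               for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]))
```

For example, a culture with viabilities of 1.0, 0.8 and 0.2 at 1, 10 and 100 µM yields an interpolated IC50 of about 31.6 µM.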
PDCs were arbitrarily classified as non-responders when the inhibitory effect on cell viability was less than 33% at the mitotane concentration corresponding to the therapeutic circulating plasma concentration (14 mg/L, 50 µM), according to a previous study. Immunofluorescence staining Briefly, cells were fixed with 4% paraformaldehyde for 15 min at room temperature, washed twice with PBS buffer (Sangon Biotech, Cat No. B548117), and permeabilized with 0.1% Triton X-100 (Sigma-Aldrich, Cat No. 9036-19-5) for 15 min. Next, cells were washed twice with PBS and blocked using antibody diluent (DAKO, Cat No. s3022) for 1 h at room temperature. Cells were then incubated with primary antibody against SF1 (1:100, Proteintech, Cat No.18658-1-AP) at 4°C overnight, followed by YSFluor 594-conjugated secondary antibodies (1:500, Yeasen Biotechnology, Cat No. 34212ES60). Nuclei were stained with 4,6-diamidino-2-phenylindole (DAPI) and wells were mounted using DAPI Fluoromount-G (SouthernBiotech, Cat No. 0100-20). High-throughput screening Cells were plated in 384-well plates (PerkinElmer, Cat No.6007680) at a density of 2000 cells per well in 50 μl total volume. HTS was performed on an automated Cell::explorer HTS pro Platform (PerkinElmer). 24 hours after seeding, cells were treated with test compounds for 6 days using a robot plate::handler equipped with a pintool dispensing device (PerkinElmer). HTS was conducted in singlicate at four concentrations for each compound. DMSO was used as the vehicle control. Cell viability was determined using CellTiter-Glo reagent (Promega, Cat No. G7572), and luminescence was measured on an EnVision multimode plate reader (PerkinElmer). Dose-response data were analyzed, and IC50 and AUC were calculated in Prism 8.3 software (GraphPad). DNA and RNA extraction Genomic DNA and total RNA were extracted from snap-frozen tumor tissues or patient-derived primary cell pellets using the AllPrep DNA/RNA Micro Kit (Qiagen, Cat No.
80284) according to the manufacturer's instructions. DNA extraction from peripheral blood leukocytes was carried out using the QIAamp DNA Mini Kit (Qiagen, Cat No. 51304). DNA and RNA concentrations were evaluated on a Qubit Fluorometer (Thermo Fisher Scientific). Whole exome sequencing WES was performed on tumor DNA and matched blood DNA. Briefly, genomic DNA of tumor and paired peripheral blood samples from 9 patients was randomly sheared by ultra-sonication to generate paired-end libraries with an average insert size of ~300 bp. Exome regions were captured with the xGen Exome Hyb Panel v2 kit (Integrated DNA Technologies, Cat No. 10005153), and sequencing was performed on the Illumina Novaseq 6000 platform (Illumina, San Diego, CA, USA) with a 150 bp paired-end strategy. Identification of somatic mutations The paired-end reads from WES were mapped to the human reference genome (hg19) with the BWA aligner (v0.7.17). Mapping results were then sorted and marked for duplicates via Picard (v2.23.0). Single nucleotide variants (SNVs) and small insertions and deletions (INDELs) were obtained by taking the union of three callers: GATK4 Mutect2, VarDict, and MuTect. All mutations were annotated by snpEff (v4.2) and ANNOVAR (v2019Dec03). All functional mutations, including missense, nonsense, splicing and nonstop SNVs, and INDELs, were obtained. In-house pipelines were used to filter SNVs and INDELs: 1) mutations were called by more than one caller; 2) variant allele frequencies (VAFs) were ≥ 10% with ≥ 4 individual mutant reads. Analysis of copy number variants Sequencing coverage and copy number in the aligned sequencing reads from WES were analyzed using CNVkit (v0.9.7). The sequencing coverage of WES in germline samples was assessed and used to create pooled reference data that included the technical variability at each exon region.
The read depths of tumor samples were individually compared with the reference after normalization (corrected for GC content, target footprint size and spacing, and repetitive sequences). The copy number segments were inferred by the circular binary segmentation algorithm. RNA sequencing and analysis We used the KAPA RNA Hyper Prep Kit (Kapa Biosystems, Cat No. KK8544) for library preparation. Sequencing was performed on the Illumina Nova S4 platform. The Illumina bcl2fastq Conversion Software was used to convert base call (BCL) files into FASTQ files. The sequences were aligned to the hg38 reference genome using HISAT2, and gene expression levels were quantified using RSEM. Count correction was performed using the removeBatchEffect function from the limma R package, and the batch-corrected expression matrix used for heatmap analysis was constructed based on the log-normalized transcripts per million (TPM) of each gene. P value < 0.05 and |logFoldChange| ≥ 1.5 were set as the thresholds for significant differential expression. Gene Set Enrichment Analysis (GSEA) was performed based on the Gene Ontology (GO) and KEGG databases. Statistical analysis Data were presented as mean ± SEM or as medians and interquartile ranges (IQR), whilst categorical variables were presented as percentages and absolute numbers. Statistical analysis was performed using IBM SPSS Statistics Version 26.0 and GraphPad Prism 8. All P values were two-sided, with P < 0.05 considered statistically significant.
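The filtering cutoffs specified in this methods section — somatic calls supported by more than one caller with VAF ≥ 10% and ≥ 4 mutant reads, and differential expression at |logFoldChange| ≥ 1.5 with P < 0.05 — amount to simple predicates. A sketch of those rules (illustrative only, not the authors' pipeline code):

```python
def keep_somatic_variant(n_callers: int, vaf: float, alt_reads: int) -> bool:
    """SNV/INDEL filter described in the text: called by more than one
    caller, VAF >= 10%, and >= 4 reads supporting the mutant allele."""
    return n_callers > 1 and vaf >= 0.10 and alt_reads >= 4

def is_differentially_expressed(log_fc: float, p_value: float) -> bool:
    """RNAseq thresholds described in the text:
    |logFoldChange| >= 1.5 and P < 0.05."""
    return abs(log_fc) >= 1.5 and p_value < 0.05
```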
Establishment of ACC primary cultures PDCs were successfully obtained from 17 ACCs upon surgery or biopsy, including six primary tumors, two local recurrences and nine metastatic tumors (lung, liver, etc.). SF1 immunofluorescence staining was performed to confirm the adrenal cortex origin of tumor cells . Patient characteristics are listed in . Of note, six patients had received mitotane for 2 to 18 months prior to surgical or biopsy intervention but underwent disease progression or lacked a satisfactory response. The principal aim was to evaluate the response of PDCs to mitotane and identify biomarkers to predict sensitivity . Since the yield of the dissociation procedure varied with the size of the tumor tissue available, HTS was performed to seek other potential agents only when the cell amount permitted. ACC PDCs depict differential sensitivity to mitotane in vitro First, we performed in vitro sensitivity testing in PDCs with a 3-day mitotane exposure, allowing us to exclude the impact of patient tolerance or pharmacokinetics. Cell viability inhibition at 50 µM mitotane was used to group ACCs into responders (>33% inhibition) and non-responders (≤33% inhibition). The median cell viability inhibition rate at 50 µM mitotane was 30.4% (IQR: -7.1%-47.9%). Eight patients (47%) were classified as responders, with a median inhibition rate of 48.4% (IQR: 39.3%-59.3%), whereas nine (53%) non-responders were scarcely inhibited by 50 µM mitotane, with a median inhibition rate of -1.2% (IQR: -26.4%-22.1%) .
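The grouping rule above reduces to a one-line predicate; a sketch using the 33%-inhibition-at-50 µM cutoff from the text:

```python
def classify_pdc(inhibition_pct_at_50um: float) -> str:
    """Classify a PDC from its viability inhibition (%) at 50 µM mitotane,
    the concentration matching the 14 mg/L therapeutic plasma level.
    Responders show >33% inhibition; all others are non-responders."""
    return "responder" if inhibition_pct_at_50um > 33.0 else "non-responder"
```

The reported group medians fall on either side of the cutoff: `classify_pdc(48.4)` returns "responder" and `classify_pdc(-1.2)` returns "non-responder".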
Dose-response curves showed the differing potency of mitotane in the two groups, as non-responders had higher IC50 values. Median IC50 values for responders and non-responders were 53.4 µM (47.8-54.4 µM) and 74.7 µM (70.9-98.8 µM), respectively (P<0.0001). AUCs in non-responders were greatly increased compared with responders: estimated AUCs were 158.0 (142.1-164.3) in responders and 213.5 (194.5-273.1) in non-responders (P<0.0001) . Clinical response data were obtained from eight patients: all three non-responders showed clinical progressive disease; three responders showed clinical stable disease, while two responders progressed . The concordance rate between the in vitro test and clinical response was 75% (6/8). Comparison of patient characteristics between mitotane responders and non-responders Comparison of clinicopathological characteristics between responders and non-responders demonstrated no significant differences in age (57.8 ± 12.4 vs 45.9 ± 24.3, P=0.233), gender (P=0.149), ENSAT staging (P=0.704) or Ki67 index (P=0.766). Notably, functional tumors with steroid hormone secretion showed a tendency toward better response in vitro than non-functional ones (66.7% vs 25.0%, P=0.086) . A negative correlation of marginal significance was found between tumor functionality and AUC (Spearman correlation coefficient = -0.481, P=0.051), in line with the above findings and indicating a tendency for tumors with active hormonal function to respond better to mitotane treatment. Genetic analysis discovers features associated with mitotane sensitivity in vitro To identify molecular factors contributing to mitotane response, we then conducted genomic (WES) and transcriptomic (RNAseq) sequencing on tissue samples or, when no additional tissue was available, on primary cell pellets.
In order to reveal intrinsic genetic features underlying mitotane sensitivity rather than acquired molecular features induced by mitotane treatment, a total of nine samples free from mitotane exposure were sequenced: five from responders and four from non-responders as classified by in vitro sensitivity testing. Alterations in established driver genes, including SNVs and CNVs, were observed in the P53/RB cell-cycle pathway (8/9, 88.9%) and the Wnt/β-Catenin signaling pathway (9/9, 100%) . 4/5 responders and 3/4 non-responders harbored genetic alterations in both pathways concurrently. More specifically, somatic mutations in TP53 were found in 3 patients (2/5 responders and 1/4 non-responders), and loss of TP53 was found in 1 responder (1/5). RB1 mutation was identified in 1/5 responders and 2/4 non-responders. CNV gain or amplification of CDK4 , CCNE1 and MDM2 was identified in 7 patients. It is well acknowledged that CTNNB1 mutations and ZNRF3 alterations are mutually exclusive . Surprisingly, we found their exclusive presence in responders and non-responders: the responder group harbored only CTNNB1 somatic mutations (3/5), while the non-responder group presented only ZNRF3 alterations (3/4). Moreover, APC alterations were observed in 4 responders and 4 non-responders. However, whether CTNNB1 and ZNRF3 alterations confer differential intrinsic sensitivity to mitotane requires further investigation. RNAseq was performed to investigate gene expression signatures. A total of 1612 genes were differentially expressed (|logFoldChange| ≥ 1.5, P<0.05) between responder and non-responder tumors . Evidence has accumulated that mitotane dysregulates lipid metabolism, raising a potential correlation between mitotane responsiveness and the capacity for handling lipids .
Notably, in our transcriptome data, the expression of genes involved in steroidogenesis ( CYP11B1 ) and lipid metabolism ( CYP27A1, ABCA1, PLIN2, PLIN4, NR1H3 , etc.) was significantly upregulated in mitotane-sensitive tumors , implying an elevated capacity for handling intracellular lipids. Consistently, functional enrichment analysis using GSEA showed that pathways associated with lipid metabolism were significantly upregulated in responders compared with non-responders, including lipid metabolic process, lipid catabolic process, lipid oxidation, cholesterol metabolic process and steroid metabolic process, possibly underlying tumor functionality . On the other hand, the Wnt signaling pathway and cell cycle process were significantly downregulated in the responder group . To further investigate marker genes correlated with in vitro mitotane responsiveness, Spearman correlation analysis was performed between gene expression levels and the AUC response data. A list of genes previously reported as key regulators of lipid metabolism (uptake, biosynthesis, storage, lipolysis, efflux, etc.) and steroidogenesis, as well as genes previously proposed as potentially predictive of mitotane response, was analyzed. We found no correlation between RRM1 , SOAT1 or CYP2W1 mRNA expression levels and mitotane responsiveness . Of note, the oxysterol synthetic enzyme CYP27A1 and the cholesterol efflux pump ABCA1 were negatively correlated with AUC , denoting that the higher the expression of CYP27A1 and ABCA1 , the lower the AUC value and the better the in vitro responsiveness to mitotane. Since oxysterol conversion and cholesterol efflux are pivotal mechanisms for preventing intracellular free cholesterol accumulation, it is tempting to speculate that higher CYP27A1 and ABCA1 expression implies higher intracellular free cholesterol at baseline, requiring enhanced conversion and efflux capacity and thus rendering cells more susceptible to mitotane.
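The Spearman correlations reported here (e.g., between CYP27A1 or ABCA1 expression and AUC) can be reproduced with a small rank-correlation routine. The sketch below is a generic illustration, not the authors' statistical code, and assumes neither input is constant:

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A perfectly inverse monotone relationship, such as higher expression paired with lower AUC, yields rho = -1.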
These findings indicate that dependence on a higher lipid-metabolic capacity to maintain intracellular lipid balance renders ACC more vulnerable to mitotane. Pharmacologic analysis reveals potential active agents against primary ACC cells To uncover potential therapies for ACC, especially for mitotane non-responders, we designed and set up a compound library containing 40 compounds at four concentrations. Drugs or compounds were chosen primarily based on the following criteria: 1) the drug was FDA-approved or in clinical trials; 2) the compound had been reported effective in ACC models or proposed as a potential targeted anti-cancer treatment for ACC . Drugs, the highest concentrations used in HTS, and references are listed in . Aiming to establish a differential cytotoxicity assay, a 6-day treatment with the compounds was performed in PDCs. Eight patient-derived ACC cell cultures (four responders and four non-responders) were tested in a proliferative assay against our in-house library in 384-well plates. Surprisingly, both mitotane-responsive and non-responsive ACC cells were extremely vulnerable to disulfiram treatment. The antihelminthic agent niclosamide and the proteasome inhibitor bortezomib, which were previously reported effective in ACC cell lines , were efficacious in 6/8 and 5/8 PDCs, with estimated IC50 values ranging from 0.22 μM to 0.77 μM and from 10 nM to 50 nM, respectively. Furthermore, doxorubicin and cisplatin were effective in 3/8 and 2/8 ACCs, respectively. Additionally, PI-103, a PI3K/mTOR inhibitor, was active in 4/8 ACCs, and the primary culture derived from Patient 13 demonstrated sensitivity to the multi-targeted tyrosine kinase inhibitors sunitinib and anlotinib .
Patient characteristics are listed in . Of note, six patients received mitotane prior to surgical or biopsy intervention for a period of 2 to 18 months but underwent disease progression or lacked a satisfactory response. The principal aim was to evaluate the response of PDCs to mitotane and to identify biomarkers to predict sensitivity . Since the yield of the dissociation procedure varied with the size of the tumor tissue available, HTS was performed to seek other potential agents only when the cell amount permitted. First, we performed in vitro sensitivity testing in PDCs with a 3-day mitotane exposure, allowing us to exclude the impact of patient tolerance or pharmacokinetics. Cell viability inhibition at 50 µM mitotane was used to group ACCs into responders (>33% inhibition) and non-responders (≤33% inhibition). The median cell viability inhibition rate at 50 µM mitotane was 30.4% (IQR: -7.1%-47.9%). Eight patients (47%) were classified as responders, with a median inhibition rate of 48.4% (IQR: 39.3%-59.3%), whereas nine (53%) non-responders were scarcely inhibited by 50 µM mitotane, with a median inhibition rate of -1.2% (IQR: -26.4%-22.1%) . Dose-response curves showed the different potency of mitotane in the two groups, as non-responders had higher IC50 values. Median IC50 values for responders and non-responders were 53.4 µM (47.8-54.4 µM) and 74.7 µM (70.9-98.8 µM), respectively (P<0.0001). AUCs in non-responders were greatly increased compared with responders: estimated AUCs were 158.0 (142.1-164.3) and 213.5 (194.5-273.1) in responders and non-responders, respectively (P<0.0001) . Clinical response data were obtained from eight patients: all three non-responders showed clinical progressive disease; three responders showed clinical stable disease, while two responders progressed . The concordance rate between the in vitro test and clinical response was 75% (6/8).
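The responder/non-responder classification described above can be sketched in a few lines. The 33% cut-off and the 50 µM dose come from the text; the viability readouts below are hypothetical, not trial data.

```python
# Sketch of the in vitro classification: a PDC is a "responder" if cell
# viability inhibition at 50 uM mitotane exceeds 33%.

def inhibition_rate(viability_treated: float, viability_control: float) -> float:
    """Percent inhibition of viability relative to the untreated control."""
    return 100.0 * (1.0 - viability_treated / viability_control)

def classify(inhibition_pct: float, cutoff: float = 33.0) -> str:
    return "responder" if inhibition_pct > cutoff else "non-responder"

# Hypothetical viability readouts (treated at 50 uM vs untreated control)
cultures = {"PDC-A": (0.45, 1.00), "PDC-B": (0.95, 1.00), "PDC-C": (0.60, 1.00)}
for name, (treated, control) in cultures.items():
    inh = inhibition_rate(treated, control)
    print(f"{name}: {inh:.1f}% inhibition -> {classify(inh)}")
```

With the median inhibition rates reported above (48.4% and -1.2%), the two groups fall on either side of the cut-off, as expected.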
Comparison of clinicopathological characteristics between responders and non-responders demonstrated no significant differences with regard to age (57.8 ± 12.4 vs 45.9 ± 24.3, P=0.233), gender (P=0.149), ENSAT staging (P=0.704) or Ki67 index (P=0.766). Notably, functional tumors with steroid hormone secretion showed a tendency towards a better response in vitro than non-functional ones (66.7% vs 25.0%, P=0.086) . A negative correlation of marginal significance was found between tumor functionality and AUC (Spearman correlation coefficient = -0.481, P=0.051), in line with the above findings and indicating a tendency for tumors with active hormonal function to respond better to mitotane treatment. To identify molecular factors contributing to mitotane response, we then conducted genomic (whole-exome sequencing, WES) and transcriptomic sequencing (RNAseq) on tissue samples, or on primary cell pellets when no additional tissue was available. In order to reveal intrinsic genetic features underlying mitotane sensitivity rather than acquired molecular features induced by mitotane treatment, a total of nine samples free from mitotane exposure were sequenced, from five responders and four non-responders classified by in vitro sensitivity testing. Alterations in the established driver genes, including SNVs and CNVs, were observed in the P53/RB cell-cycle pathway (8/9, 88.9%) and the Wnt/β-Catenin signaling pathway (9/9, 100%) . Four of five responders and three of four non-responders harbored genetic alterations in both pathways concurrently. More specifically, somatic mutations in TP53 were found in 3 patients (2/5 responders and 1/4 non-responders) and loss of TP53 was found in 1 responder (1/5). RB1 mutation was identified in 1/5 responders and 2/4 non-responders. CNV gain or amplification in CDK4 , CCNE1 and MDM2 was identified in 7 patients. It is well acknowledged that CTNNB1 mutations and ZNRF3 alterations are mutually exclusive .
Surprisingly, we found their exclusive presence in responders and non-responders: the responder group harbored only CTNNB1 somatic mutations (3/5), while the non-responder group presented only ZNRF3 alterations (3/4). Moreover, APC alterations were observed in 4 responders and 4 non-responders. However, whether CTNNB1 and ZNRF3 alterations render differential intrinsic sensitivity to mitotane requires further investigation. RNAseq was performed to investigate gene expression signatures. A total of 1612 genes were differentially expressed (|logFoldChange| ≥ 1.5, P<0.05) between responder and non-responder tumors . Evidence has accumulated that mitotane dysregulates lipid metabolism, raising a potential correlation between mitotane responsiveness and the capacity for handling lipids . Notably, in our transcriptome data, expression of genes involved in steroidogenesis ( CYP11B1 ) and lipid metabolism ( CYP27A1, ABCA1, PLIN2, PLIN4, NR1H3 , etc.) was significantly upregulated in mitotane-sensitive tumors , implying an elevated capacity for handling intracellular lipids. Consistently, functional enrichment analysis using GSEA showed that pathways associated with lipid metabolism were significantly upregulated in responders compared with non-responders, including lipid metabolic process, lipid catabolic process, lipid oxidation, cholesterol metabolic process and steroid metabolic process, possibly underlying tumor functionality . On the other hand, the Wnt signaling pathway and cell cycle process were significantly downregulated in the responder group . To further investigate marker genes correlated with in vitro mitotane responsiveness, Spearman correlation analysis was performed between gene expression levels and the AUC response data. A list of genes previously reported as key regulators of lipid metabolism (uptake, biosynthesis, storage, lipolysis, efflux, etc.), of steroidogenesis, and of genes previously proposed as potentially predictive of mitotane response was analyzed.
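The Spearman analysis described above (rank correlation between per-sample gene expression and AUC) can be sketched in pure Python; the expression and AUC values below are hypothetical and chosen so that higher ABCA1 expression tracks with lower AUC, i.e. better in vitro response.

```python
# Pure-Python Spearman correlation: rank-transform both variables (average
# ranks for ties), then compute the Pearson correlation of the ranks.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over tied values
        avg = (i + j) / 2.0 + 1.0       # average 1-based rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: higher ABCA1 expression, lower AUC (better response)
abca1 = [8.1, 6.4, 9.3, 5.0, 7.7, 4.2]
auc = [150.0, 210.0, 142.0, 260.0, 158.0, 273.0]
print(spearman(abca1, auc))  # perfectly inverse rank order -> -1.0
```

A negative coefficient here corresponds to the reported direction of effect: higher CYP27A1/ABCA1 expression, lower AUC.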
We found no correlation between RRM1 , SOAT1 or CYP2W1 mRNA expression levels and mitotane responsiveness . Of note, the oxysterol synthetic enzyme CYP27A1 and the cholesterol efflux pump ABCA1 were negatively correlated with AUC , denoting that the higher the expression of CYP27A1 and ABCA1 , the lower the AUC value and the better the in vitro responsiveness to mitotane. Since these genes constitute a pivotal mechanism for preventing intracellular free cholesterol accumulation, it is tempting to speculate that higher CYP27A1 and ABCA1 expression implied higher intracellular free cholesterol at baseline, requiring enhanced conversion and efflux ability and thus conferring greater susceptibility to mitotane. These findings indicated that dependence on a higher capacity for lipid metabolism to maintain intracellular lipid balance rendered ACC more vulnerable to mitotane. In order to uncover potential therapies for ACC, especially for mitotane non-responders, we designed and set up a compound library containing 40 compounds at four concentrations. Drugs or compounds were chosen primarily based on the following criteria: 1) the drug was FDA-approved or in clinical trials; 2) the compound had been reported effective in ACC models or proposed as a potential targeted anti-cancer treatment for ACC . Drugs and the highest concentrations used in HTS, as well as references, are listed in . Aiming to establish a differential cytotoxicity assay, a 6-day treatment with compounds was performed in PDCs. Eight patient-derived ACC cells (four responders and four non-responders) were tested in a proliferative assay against our in-house library in 384-well plates. Surprisingly, both mitotane-responsive and non-responsive ACC cells were extremely vulnerable to disulfiram treatment. The anthelmintic agent niclosamide and the proteasome inhibitor bortezomib, which were previously reported effective in ACC cell lines , were found efficacious in 6/8 and 5/8 PDCs, with estimated IC50 values ranging from 0.22 μM to 0.77 μM and from 10 nM to 50 nM, respectively.
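With only four concentrations per compound, an IC50 can be estimated by log-linear interpolation between the two doses that bracket 50% viability. This is a sketch of one simple approach (the trial may have used a different fitting method), with hypothetical dose-response values in the range reported for niclosamide.

```python
import math

# Estimate IC50 from a sparse dose-response curve by interpolating the dose
# at which viability crosses 50%, on a log-concentration scale.

def ic50(concs, viabilities):
    """concs in ascending order; viabilities as fractions of control."""
    for i in range(len(concs) - 1):
        c1, v1, c2, v2 = concs[i], viabilities[i], concs[i + 1], viabilities[i + 1]
        if v1 >= 0.5 >= v2:  # the 50% crossing lies in this interval
            frac = (v1 - 0.5) / (v1 - v2)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    return None  # no crossing within the tested range

# Hypothetical niclosamide-like response at 0.1, 0.3, 1 and 3 uM
concs = [0.1, 0.3, 1.0, 3.0]
viab = [0.95, 0.70, 0.30, 0.05]
print(f"estimated IC50 ~ {ic50(concs, viab):.2f} uM")
```

For these hypothetical points the crossing falls between 0.3 µM and 1 µM, giving an estimate of roughly 0.55 µM, consistent with the 0.22-0.77 µM range quoted above.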
Furthermore, doxorubicin and cisplatin were effective in 3/8 and 2/8 ACCs, respectively. Additionally, PI-103, a PI3K/mTOR inhibitor, was active in 4/8 ACCs, and the primary culture derived from Patient 13 demonstrated sensitivity to the multi-targeted tyrosine kinase inhibitors sunitinib and anlotinib . It is important to identify predictive factors associated with mitotane efficacy in ACC for patient selection and to seek other potential treatments. In the current study, we revealed 1) variable sensitivity to mitotane in primary ACC cultures; 2) that response to mitotane might be associated with the capacity for lipid metabolism; and 3) potential drug repurposing opportunities for existing drugs including disulfiram, niclosamide and bortezomib. The overall clinical efficacy of mitotane in ACC patients is 10%-35%. In our in vitro assay, 8 patients (47%) were classified as responders. A higher response rate in vitro was also observed in Dr. Hofland’s study . This phenomenon may be due to the fact that, in clinical settings, the concentration of mitotane reaches the therapeutic level in only about 50% of patients , whereas in the in vitro test the mitotane concentration was uniform. In a pilot cytotoxicity study, we found no difference between 3-day and 6-day mitotane treatment in terms of IC50 measurement, consistent with a previous study showing that mitotane exerts its cellular effects early after exposure, within the first 24 hours in vitro . Therefore, we adopted a 3-day mitotane assay in PDCs. The short-term culture could also avoid fibroblast outgrowth and made it possible to differentiate the heterogeneous responses. Here, a cut-off of 33% reduction in cell viability with 50 µM mitotane treatment was used as an index of in vitro sensitivity. The concordance rate between the mitotane in vitro sensitivity test and clinical response was 75%.
A larger sample size, with more in vitro sensitivity testing and the corresponding clinical responses to mitotane in the respective patients, would certainly be necessary to determine the most appropriate cut-off value. Our results indicated that 3-day in vitro mitotane sensitivity testing was technically feasible for rapid prediction of mitotane response. Comparing clinical features between responders and non-responders, we found that hormonally active tumors tended to respond better to mitotane exposure in vitro . Six of nine functional ACCs were in the responder group. Specifically, in the responder group, functional tumors accounted for 75% (6/8), comprising four cortisol-secreting ACCs (one with androgen co-secretion) and two androgen-secreting ACCs, while in the non-responder group cortisol-secreting (2/9) and aldosterone-secreting (1/9) ACCs accounted for 22.2% and 11.1%, respectively. This was consistent with Dr. Hofland’s finding that the proportion of cortisol-producing ACCs was highest in the responder group (73%), with a gradually decreasing percentage from the partial responder (43%) to the non-responder group (14%, P = 0.068) . A possible association between tumor hormonal activity and in vitro mitotane sensitivity was also suggested by the transcriptomic features. Our transcriptome data revealed that CYP11B1 was the most upregulated gene and that the steroid hormone metabolic process was significantly enriched in the responder group, in support of the previous discovery that the metabolic activation of mitotane is mainly dependent on CYP11B1 . Elevated mRNA expression of CYP27A1 and ABCA1 was identified as correlated with higher mitotane sensitivity. The mitochondrial hydroxylase CYP27A1 is a key enzyme responsible for converting cholesterol to oxysterol, namely 27-hydroxycholesterol (27HC). 27HC acts as a liver X receptor (LXR) agonist and upregulates expression of cholesterol efflux pumps (i.e., ABCA1 and ABCG1) to prevent intracellular cholesterol accumulation .
CYP27A1 is abundant in the adrenal cortex, most prominently in the zona fasciculata . Oxysterol/LXR signaling is involved in adrenal steroidogenesis and serves as a safety valve to limit free cholesterol levels, thereby protecting the adrenal cortex from lipotoxicity . Because mitotane can cause lipotoxicity in ACC cells by targeting lipolysis and cholesterol storage , we hypothesize that ACCs expressing higher levels of CYP27A1 and ABCA1 might depend tightly on their capacity to handle cholesterol flux and thus be vulnerable to the disturbance of lipid homeostasis induced by mitotane. CTNNB1 mutations and ZNRF3 alterations are among the most common somatic changes in ACC . The genomic analysis uncovered ZNRF3 alterations in three of four non-responders and CTNNB1 alterations in three of five responders. However, elucidating the relationship of CTNNB1 and ZNRF3 alterations to mitotane response requires further investigation. A higher percentage of patients harboring alterations affecting both the TP53/RB and Wnt/β-Catenin pathways was observed, which might be because the patients included in this study had more aggressive disease with dismal outcomes . Additionally, a significant enrichment of Wnt signaling and cell cycle processes in the non-responder group was observed in the transcriptomic data, indicating a more pronounced dysregulation of these two pathways. Given the relatively small sample size, these observations require cautious interpretation. Improved therapeutics for advanced ACC have long been an unmet medical need. Here, we used PDCs for HTS aiming to identify potential agents for ACC, and particularly to explore drug repurposing opportunities. Patient 12 (P12) had previously received a chemotherapy regimen (etoposide and carboplatin) for four cycles but suffered progressive disease. Primary cells derived from this patient showed great resistance to etoposide and oxaliplatin but sensitivity to cisplatin. There was good consistency between the clinical and in vitro responses to etoposide.
However, the differential sensitivity to cisplatin versus oxaliplatin and carboplatin might be attributed to the different potencies and modes of action of these platinum analogues . Notably, niclosamide and bortezomib were highly efficacious in PDCs, with IC50 values below the known maximum plasma concentrations (Cmax) in humans: 18.34 μmol/l for niclosamide and 120.3 ng/ml for bortezomib . Applying PDCs in drug repurposing might be a promising strategy to guide personalized therapy in ACC. Our study has the strength of integrating genomic, transcriptomic and pharmacological analyses of ACC PDCs to identify molecular biomarkers associated with mitotane response, and of performing HTS against PDCs to uncover potential active agents for the first time. Efforts were made to identify correlations between in vitro mitotane response and clinical response, and the concordance rate reached 75% (6/8). Still, our research has several limitations. First, mitotane plasma concentrations were not available; subtherapeutic plasma levels might be responsible for the clinical progressive disease observed in "responders". In six metastatic patients, the clinical response of the lesions from which the primary cultures were derived could not be evaluated because these lesions underwent locoregional treatments, including surgery (one patient), radiofrequency ablation (RFA, four patients) and transarterial embolization (TAE, one patient). Second, the number of primary cultures tested was still limited because of the rarity of ACC. A larger cohort would be required to establish more robust gene-drug associations. In summary, ACC PDC models provide a feasible approach for pharmacological sensitivity evaluation to guide personalized therapies. Clinical features and transcriptomic signatures suggested that the hormonal secretion activity of ACC might be associated with response to mitotane, warranting further investigation.
Future research is needed to confirm whether CYP27A1 and ABCA1 expression levels could be used as predictors of mitotane sensitivity. The original contributions presented in the study are publicly available. These data can be found here: https://ngdc.cncb.ac.cn/gsa-human/ , accession number HRA006596. The studies involving humans were approved by the local ethics committee of Ruijin Hospital. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. JZha: Formal analysis, Methodology, Visualization, Writing – original draft, Writing – review & editing. LW: Investigation, Resources, Writing – original draft, Writing – review & editing. TS: Investigation, Resources, Writing – review & editing. HLiu: Methodology, Visualization, Writing – review & editing. LJ: Investigation, Resources, Writing – review & editing. YJ: Investigation, Resources, Writing – review & editing. ZW: Investigation, Resources, Writing – review & editing. LC: Investigation, Resources, Writing – review & editing. HLi: Methodology, Writing – review & editing. JZhe: Methodology, Visualization, Writing – review & editing. YS: Methodology, Visualization, Writing – review & editing. HP: Methodology, Writing – review & editing. RH: Methodology, Writing – review & editing. GN: Supervision, Writing – review & editing. LY: Conceptualization, Supervision, Writing – review & editing. WW: Conceptualization, Supervision, Writing – review & editing.
A pragmatic and scalable strategy using mobile technology to promote sustained lifestyle changes to prevent type 2 diabetes in India and the UK: a randomised controlled trial | 44559d4e-ac50-49e6-8a45-dacdc03e283a | 6997257 | Preventive Medicine[mh] | The public health challenge of type 2 diabetes is set to worsen as the prevalence rises from 425 million people globally in 2017 to 629 million by 2045 . Diabetes is preceded by a period of intermediate hyperglycaemia (prediabetes), during which lifestyle interventions have been shown to reduce progression to diabetes in several RCTs . The interventions in the initial diabetes prevention RCTs were labour intensive and difficult to scale up to reach large numbers of people at risk. Simple and scalable approaches to educate and motivate at-risk individuals to make behavioural changes using mobile phone short message service (SMS) text messages have been developed in several areas of preventive medicine . In a previous RCT in India, we demonstrated that the delivery of a package of customised, tailored SMS messages based on the transtheoretical model (TTM) of behaviour change was effective compared with standard care, reducing the incidence of type 2 diabetes by 36% over 2 years . In that RCT we recruited working Asian Indian men with persistent prediabetes defined as impaired glucose tolerance on two OGTTs. This method for defining prediabetes (and for assessing progression to diabetes) is time consuming for participants and the healthcare system and is difficult to scale up at societal level. In the current study, we wished to test the generalisability of the results from the previous trial in India . 
To do this, first, we included women as well as men; second, as the previous study was relatively small (537 participants), a larger number of participants was recruited; third, we tested the intervention in two ethnically and culturally different environments, India and the UK, using similar primary and secondary outcomes in both countries, with only minor differences reflecting the different populations and settings; finally, we used a more pragmatic method than glucose estimations to define hyperglycaemia, HbA1c, as recommended by the WHO . The protocol permitted a comparative pooled analysis of outcomes from the two populations, including an exploration of reasons for potential heterogeneity in the results.

Study design and participants

The detailed protocol has been reported previously . In brief, a randomised, controlled clinical trial was conducted over 2 years in people with prediabetes defined by an HbA1c level of ≥42 and ≤47 mmol/mol (≥6.0% and ≤6.4%) (the high prediabetes range). Screening for possible participants took place in workplaces in India and at National Health Service (NHS) Health Checks and in primary care centres in the UK. All participants received structured education for prediabetes, and the intervention group received, in addition, SMS messages about lifestyle 2–3 times weekly during the trial. Participants were monitored at baseline and at 6, 12 and 24 months, undertaking repeat assessment of HbA1c and blood glucose levels and completing questionnaires (the Euro quality of life 5 dimension 3 level [EQ-5D-3L], a recent physical activity questionnaire [RPAQ], a TTM of behavioural change questionnaire, and food frequency [UK] or 24 h dietary recall [India]). Physical activity (by accelerometer; ActiGraph GT3X+, ActiGraph, Pensacola, FL, USA) and acceptability of the SMS were monitored at baseline and during follow-up. The primary outcome was progression to diabetes.
The secondary outcomes included anthropometric measurements, other cardiovascular risk factors and measures of lifestyle behaviours. Figure shows the flow diagram of pre-screening, screening, enrolment and randomisation, and the numbers of participants in the two countries. The total number of participants included in the analysis was 2062 (1031 in the control group, 1031 in the intervention group).

Pre-screening and screening

In India, pre-screening to identify people at high risk of developing diabetes was undertaken between April 2012 and November 2015 in Asian Indian men and women aged 35–55 years in Chennai and surrounding areas. The target population was employees of public and private sector organisations. Following a diabetes awareness programme, participants with no personal history of diabetes or other major physical or mental illness who had three or more risk factors, including age 35–55 years, BMI ≥23 kg/m², waist circumference ≥90 cm in men and ≥80 cm in women, first-degree family history of type 2 diabetes, history of hypertension or prediabetes, or habitual sedentary behaviour, were selected for further screening using HbA1c. Those with values in the high prediabetes range (≥42 and ≤47 mmol/mol [≥6.0% and ≤6.4%]) were invited to participate in the trial.
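The two-step pre-screening logic for the India arm (at least three risk factors, then an HbA1c value in the 42-47 mmol/mol window) can be sketched as a small function. The field names and the participant record below are hypothetical.

```python
# Sketch of the India-arm eligibility logic: count risk factors, then check
# the HbA1c window. Booleans sum as 0/1 in Python.

def risk_factor_count(p):
    n = 0
    n += 35 <= p["age"] <= 55
    n += p["bmi"] >= 23
    n += p["waist_cm"] >= (90 if p["sex"] == "M" else 80)
    n += p["family_history"]
    n += p["hypertension_or_prediabetes_history"]
    n += p["sedentary"]
    return n

def eligible(p):
    return risk_factor_count(p) >= 3 and 42 <= p["hba1c_mmol_mol"] <= 47

candidate = {"age": 45, "sex": "F", "bmi": 27.0, "waist_cm": 88,
             "family_history": True, "hypertension_or_prediabetes_history": False,
             "sedentary": True, "hba1c_mmol_mol": 44}
print(eligible(candidate))  # 5 risk factors and HbA1c in range -> True
```

In the trial itself HbA1c was measured only after the risk-factor step, so in practice the two checks would run sequentially rather than in one call.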
Randomisation and masking Randomisation was performed in India using a computer-generated sequence, to either individually tailored mobile phone SMS messages supplementing baseline lifestyle advice (intervention group) or to a control group that received only the lifestyle modification advice at baseline (1:1). Randomisation in the UK was performed by a commercial organisation in random permuted blocks stratified by sex, age and BMI. Written, informed consent was obtained from the participants and in India permissions had also been obtained from the employers. In both countries, laboratory personnel and investigators were blinded to the participants’ group allocation until the end of the study. Staff involved in delivering the intervention and the participants themselves were, by necessity, not masked. The study was registered on www.ClinicalTrials.gov (India, NCT01570946; UK, NCT01795833). The trial was registered separately in the two countries since the funding was received from two different national agencies. Procedures for text messages At baseline, in both countries, all trial participants received personalised education and motivation about healthy diet and the benefits of enhanced physical activity. In addition, the intervention group received regular SMS messages, typically 2–3 per week, to provide additional education and motivation. The content of the messages provided by SMS was similar in both countries. The messages used in the previous study in India were modified and expanded. In the UK, a Patient and Public Involvement Group in the National Institute for Health Research Clinical Research Network provided input into the SMS message design and content. The messages provided tips, suggestions and positive reinforcement for healthy behaviours including goal setting, physical activity, dietary planning and personal strategies for lifestyle change. 
The message content was based on the TTM of behavioural change , a stage-based concept with the categories precontemplation (not ready), contemplation (getting ready), preparation (ready), action and maintenance. We prepared an SMS database and grouped the messages to be appropriate for each TTM stage, with 75–80 messages per stage. Messages were sent to the participants based on the TTM staging performed at each follow-up. The type and content of the messages were changed frequently to avoid repetition. Messages were delivered by commercial service providers. In India, messages were in English and in two local languages and were sent between 06:30 hours and 08:30 hours or after 18:00 hours, as preferred by the participants. In the UK, messages were sent at 10:00 hours on alternate days. The preferred time was ascertained during the follow-up visits so that the messages did not go unnoticed. Acceptability of SMS in the intervention group was assessed using a short questionnaire . Responses to questions about message content, frequency, ease of understanding, whether messages were considered a disturbance and whether they were perceived as helpful in improving lifestyle were scored as 0 or 1. A total score of 6 was the most acceptable and 0 the least. A modified acceptability questionnaire was used in the UK.

Lifestyle and quality of life

Diet

At baseline, individualised dietary recommendations were delivered to balance food intake and physical activity, and to aim for an appropriate body weight. Advice included: avoidance of simple sugars and refined carbohydrates, reduction of total fat intake (<20 g/day) and inclusion of more fibre-rich foods (e.g. whole grains, legumes, vegetables and fruits). Evaluation was performed using 24 h dietary recall, a method used previously in India .
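The stage-matched message rotation described earlier (pools of 75–80 messages per TTM stage, varied to avoid repetition) can be sketched as below. The message texts are placeholders, and the per-participant "already sent" set is an assumed implementation detail.

```python
import random

# Sketch of stage-matched SMS selection: draw from the participant's current
# TTM stage pool, skipping messages already sent until the pool is exhausted.

TTM_STAGES = ["precontemplation", "contemplation", "preparation",
              "action", "maintenance"]

# Hypothetical pools (the trial used 75-80 messages per stage)
MESSAGES = {stage: [f"{stage} tip #{i}" for i in range(1, 76)]
            for stage in TTM_STAGES}

def next_message(stage, already_sent, rng=random):
    pool = [m for m in MESSAGES[stage] if m not in already_sent]
    if not pool:                 # pool exhausted: start rotating again
        already_sent.clear()
        pool = list(MESSAGES[stage])
    msg = rng.choice(pool)
    already_sent.add(msg)
    return msg

sent = set()
for _ in range(3):               # e.g. three messages in one week
    print(next_message("action", sent))
```

Re-staging a participant at follow-up simply switches which pool `next_message` draws from.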
In the UK arm, a food frequency questionnaire was used to calculate dietary energy intake and major food constituents , a method previously validated against 24 h dietary recall .

Physical activity

Participants who reported being sedentary or undertaking only light physical activity at baseline were advised to walk briskly every day for a minimum of 30 min. People who reported strenuous occupations or sufficient daily physical activity were advised to continue these activities. Physical activity was assessed by self-report using the RPAQ, which has previously been shown to provide a valid estimate of physical activity energy expenditure (PAEE), measured against the gold standard criterion method of doubly labelled water, and of time spent at different intensity levels . We also assessed physical activity objectively using triaxial accelerometry (ActiGraph GT3X+), which has also been validated against criterion methods .

Quality of life

The EQ-5D-3L version for India was administered to capture the individuals’ ‘perceived’ quality of life based on the effects of the health intervention . The questionnaire consists of five dimensions (mobility, self-care, usual activities, pain/discomfort, anxiety/depression) and the responses record three levels of severity (no problems/some or moderate problems/extreme problems) within each dimension. The EQ-5D-3L summary measure was calculated using a value set derived from a UK sample, since there are no published value sets for Indian populations.

Biochemical assessments

At baseline and at each review, anthropometry, blood pressure (mean of two readings using a sphygmomanometer), HbA1c and serum lipid profile (total cholesterol, low-density lipoprotein cholesterol, HDL-cholesterol and triacylglycerols) were measured using standard enzymatic procedures with quality control.

Ethics approvals

In India, the Ethics Review Committee of the India Diabetes Research Foundation and Dr. A.
Ramachandran’s Diabetes Hospitals reviewed and approved the study protocol. An independent safety committee assessed study progress with unmasked data at 6 month intervals. In the UK, approval was obtained from the Westminster Research Ethics Committee, and site-specific assessment (SSA) plus research and development (R&D) approvals were in place at each participating NHS Trust. Imperial College Academic Health Science Centre acted as the main sponsor. Delegated responsibilities were assigned to the participating NHS trusts.

Outcomes

In the UK, the primary outcome was progression to diabetes as defined by international criteria for fasting plasma glucose or HbA1c at any study review visit or in any healthcare setting. In India, information from study follow-up visits was available; thus, diabetes was defined on HbA1c alone. The secondary outcomes were body weight and BMI, waist circumference, blood pressure, fasting plasma glucose, lipid levels, the proportion achieving HbA1c ≤42 mmol/mol (≤6.0%), acceptability of SMS, dietary variables, physical activity and quality of life.

Statistical analyses

Based on results from the Indian Diabetes Prevention study , a 2 year risk of diabetes in the control group of 25% was assumed. With 2268 participants (1134 per group), the trial had 80% power to detect a relative reduction in risk of 20% as significant at the 5% level, allowing for approximately 4% withdrawals. Baseline characteristics were summarised by randomised group using mean and standard deviation (continuous variables), median and interquartile range (continuous variables with a skewed distribution), or frequency and percentage (categorical variables). The primary outcome was compared between intervention and control groups using a discrete-time proportional hazards model with a complementary log-log link function, since the data were interval censored, adjusted for country.
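The sample-size calculation quoted above (80% power, two-sided 5% significance, 25% control risk, 20% relative risk reduction) can be checked with the standard two-proportion normal approximation. This is a back-of-envelope sketch, not necessarily the trial statisticians' exact method; it lands close to the reported 1134 per group once roughly 4% withdrawals are allowed for.

```python
import math

# Two-proportion sample size: p1 = 0.25 (control), p2 = 0.20 (a 20% relative
# reduction), alpha = 0.05 two-sided, power = 0.80.

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    def z(q):  # standard normal quantile via bisection on the erf-based CDF
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < q:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    za, zb = z(1 - alpha / 2), z(power)
    pbar = (p1 + p2) / 2
    num = (za * math.sqrt(2 * pbar * (1 - pbar))
           + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

n = n_per_group(0.25, 0.20)
print(math.ceil(n), math.ceil(n / 0.96))  # raw n, then inflated for ~4% loss
```

The raw figure comes out at about 1094 per group, or roughly 1140 per group after inflating for withdrawals, in line with the 1134 per group stated above.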
The multiplicative interactions between randomised group and (1) country and (2) sex were tested using a Wald test. The HR for type 2 diabetes and 95% confidence intervals were reported for the overall trial population, and separately by country (UK/India) and sex, which were the only pre-specified subgroups. Secondary outcomes, measured at specified time points during follow-up, were analysed using linear regression with random intercepts at the individual level to allow for repeated measures, including the baseline value of the outcome, country, randomised group and time, to estimate an overall intervention effect, and then also using a randomised group × time interaction, to estimate intervention effects at each follow-up time. Accelerometer wear time was included in the model for objectively measured physical activity outcomes. Outcomes with a skewed distribution were log-transformed prior to analysis. The trial was analysed on an intention-to-treat basis. The primary outcome was also analysed in a per-protocol population, which excluded individuals in whom the intervention was not successfully delivered. All analyses were pre-specified and performed using Stata version 14.2 (Stata, College Station, TX, USA).
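The complementary log-log link used for the interval-censored primary outcome has a convenient interpretation: a coefficient beta corresponds to a hazard ratio exp(beta), and the interval survival probabilities satisfy (1 - p1) = (1 - p0) ** HR. A minimal numerical sketch with hypothetical interval risks (the 36% reduction echoes the earlier Indian trial, used here purely as an illustrative HR):

```python
import math

# cloglog(p) = log(-log(1 - p)); adding beta on the cloglog scale multiplies
# the underlying hazard by exp(beta), so interval survival obeys
# (1 - p1) = (1 - p0) ** HR.

def cloglog(p):
    return math.log(-math.log(1.0 - p))

def inv_cloglog(eta):
    return 1.0 - math.exp(-math.exp(eta))

p_control = 0.10   # hypothetical per-interval diabetes risk, control group
hr = 0.64          # hypothetical hazard ratio (a 36% reduction)
p_intervention = inv_cloglog(cloglog(p_control) + math.log(hr))

print(round(p_intervention, 4))  # prints 0.0652, i.e. 1 - 0.9 ** 0.64
```

This is why the cloglog link (rather than, say, the logit) gives group effects that can be read directly as hazard ratios even though the data are grouped into discrete review intervals.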
Participants were monitored at baseline and at 6, 12 and 24 months, undertaking repeat assessment of HbA1c and blood glucose levels and completing questionnaires (the EuroQol 5-dimension 3-level questionnaire [EQ-5D-3L], a recent physical activity questionnaire [RPAQ], a transtheoretical model (TTM) of behavioural change questionnaire, and food frequency [UK] or 24 h dietary recall [India]). Physical activity (by accelerometer; ActiGraph GT3X+, ActiGraph, Pensacola, FL, USA) and acceptability of the SMS were monitored at baseline and during follow-up. The primary outcome was progression to diabetes. The secondary outcomes included anthropometric measurements, other cardiovascular risk factors and measures of lifestyle behaviours. Figure shows the flow diagram of pre-screening, screening, enrolment and randomisation, and the numbers of participants in the two countries. The total number of participants included in the analysis was 2062 (1031 in the control group, 1031 in the intervention group). In India, pre-screening to identify people at high risk of developing diabetes was undertaken between April of 2012 and November of 2015 in Asian Indian men and women aged 35–55 years in Chennai and surrounding areas. The target population was employees from public and private sector organisations. Following a diabetes awareness programme, participants with no personal history of diabetes or other major physical or mental illness who had three or more risk factors, including age 35–55 years, BMI ≥23 kg/m², waist circumference ≥90 cm in men and ≥80 cm in women, first-degree family history of type 2 diabetes, history of hypertension or prediabetes, or habitual sedentary behaviour, were selected for further screening using HbA1c. Those with values in the high prediabetes range (≥42 and ≤47 mmol/mol [≥6.0% and ≤6.4%]) were invited to participate in the trial.
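The Indian pre-screening rule described above (three or more risk factors, then an HbA1c value in the high prediabetes range) can be written as a small eligibility check. A sketch; the function and parameter names are ours, not from the study protocol:

```python
def risk_factor_count(age, bmi, waist_cm, is_male,
                      family_history, hypertension_or_prediabetes,
                      sedentary):
    """Count the pre-screening risk factors used in the Indian arm."""
    factors = [
        35 <= age <= 55,
        bmi >= 23,                            # kg/m2
        waist_cm >= (90 if is_male else 80),  # sex-specific cut-off
        family_history,                       # first-degree relative with T2D
        hypertension_or_prediabetes,
        sedentary,                            # habitual sedentary behaviour
    ]
    return sum(factors)

def eligible_for_trial(n_risk_factors, hba1c_mmol_mol):
    """Invited to the trial if >=3 risk factors and HbA1c in the
    high prediabetes range (42-47 mmol/mol, i.e. 6.0-6.4%)."""
    return n_risk_factors >= 3 and 42 <= hba1c_mmol_mol <= 47
```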
In the UK, pre-screening was conducted mainly using the NHS Health Checks programme, which is a cardiovascular and diabetes risk assessment offered routinely and free of charge to people aged 40–74 years without pre-existing diabetes, cardiovascular disease or kidney disease. The programme operates in primary care; people who met the HbA1c entry criteria (≥42 and ≤47 mmol/mol [≥6.0% and ≤6.4%]) were invited to participate in the trial if they fulfilled the other entry criteria. In some primary care centres, screening schemes other than the NHS Health Checks programme were used. Written, informed consent was obtained from all participants; in India, permission had also been obtained from the employers. Randomisation was performed in India using a computer-generated sequence, to either individually tailored mobile phone SMS messages supplementing baseline lifestyle advice (intervention group) or to a control group that received only the lifestyle modification advice at baseline (1:1). Randomisation in the UK was performed by a commercial organisation in random permuted blocks stratified by sex, age and BMI. In both countries, laboratory personnel and investigators were blinded to the participants’ group allocation until the end of the study. Staff involved in delivering the intervention and the participants themselves were, by necessity, not masked. The study was registered on www.ClinicalTrials.gov (India, NCT01570946; UK, NCT01795833). The trial was registered separately in the two countries since the funding was received from two different national agencies. At baseline, in both countries, all trial participants received personalised education and motivation about healthy diet and the benefits of enhanced physical activity. In addition, the intervention group received regular SMS messages, typically 2–3 per week, to provide additional education and motivation.
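Allocation in the UK used random permuted blocks (stratified by sex, age and BMI). Within a single stratum, permuted-block randomisation can be sketched as below; the block size and seed are illustrative, and the trial's actual scheme was implemented by a commercial organisation:

```python
import random

def permuted_block_sequence(n_participants, block_size=4, seed=1):
    """Generate a 1:1 allocation sequence in random permuted blocks.

    Each block contains equal numbers of 'control' and 'intervention'
    in random order, so group sizes never differ by more than
    block_size / 2 at any point during recruitment."""
    assert block_size % 2 == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["control", "intervention"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

allocation = permuted_block_sequence(20)
```

Stratification simply means running an independent sequence like this within each sex × age × BMI stratum.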
The content of the messages provided by SMS was similar in both countries. The messages used in the previous study in India were modified and expanded. In the UK, a Patient and Public Involvement Group in the National Institute for Health Research Clinical Research Network provided input into the SMS message design and content. The messages provided tips, suggestions and positive reinforcement for healthy behaviours including goal setting, physical activity, dietary planning and personal strategies for lifestyle change. The message content was based on the TTM of behavioural change, a stage-based concept categorised by: precontemplation (not ready), contemplation (getting ready), preparation (ready), action and maintenance. We prepared an SMS database and grouped the messages to be appropriate for each TTM stage. There were 75–80 messages in each TTM stage. Messages were sent to the participants based on the TTM staging performed at each follow-up. The type and content of the messages were changed frequently to avoid repetition. Messages were delivered by commercial service providers. In India, messages were in English and in two local languages and were sent between 06:30 hours and 08:30 hours or after 18:00 hours, as preferred by the participants. In the UK, messages were sent at 10:00 hours on alternate days. The preferred time was ascertained during the follow-up visits so that the messages did not go unnoticed. Acceptability of SMS in the intervention group was assessed using a short questionnaire. Responses to questions about message content, frequency, ease of understanding, whether messages were considered a disturbance and whether they were perceived as helpful in improving lifestyle were scored as 0 or 1. A total score of 6 was the most acceptable and 0 the least. A modified acceptability questionnaire was used in the UK.
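The acceptability questionnaire sums six binary items into a 0–6 score. A minimal sketch; the item names are paraphrased from the description above, and the sixth item is a placeholder since only five are spelled out in the text:

```python
def acceptability_score(responses):
    """Sum six binary (0/1) items into a 0-6 acceptability score."""
    items = ("content", "frequency", "ease_of_understanding",
             "not_a_disturbance", "helpful_for_lifestyle", "overall")
    # 'overall' is a placeholder name: the published questionnaire's
    # sixth item is not spelled out in the text above.
    assert set(responses) == set(items)
    assert all(v in (0, 1) for v in responses.values())
    return sum(responses.values())
```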
Diet

At baseline, individualised dietary recommendations were delivered to balance food intake and physical activity, and to aim for an appropriate body weight. Advice included: avoidance of simple sugars and refined carbohydrates, reduction of total fat intake (<20 g/day) and inclusion of increased fibre-rich food (e.g. whole grains, legumes, vegetables and fruits). Evaluation was performed using 24 h dietary recall, a method used previously in India. In the UK arm, a food frequency questionnaire was used for calculation of dietary energy intake and major food constituents, a method previously validated against 24 h dietary recall.

Physical activity

Participants who reported being sedentary or who undertook only light physical activity at baseline were advised to walk briskly every day for a minimum of 30 min. People who reported strenuous occupations or sufficient physical activity per day were advised to continue these activities. Physical activity was assessed by self-report using the RPAQ, which has previously been shown to provide a valid estimate of physical activity energy expenditure (PAEE), measured by the gold standard criterion method of doubly labelled water, and time spent in different intensity levels. We also assessed physical activity objectively using triaxial accelerometry (ActiGraph GT3X+), which has also been validated against criterion methods.

Quality of life

The EQ-5D-3L version for India was administered to capture the individuals’ ‘perceived’ quality of life based on the effects of the health intervention. The questionnaire consists of five dimensions (mobility, self-care, usual activities, pain/discomfort, anxiety/depression) and the responses record three levels of severity (no problems/some or moderate problems/extreme problems) within each dimension. The EQ-5D-3L summary measure was calculated using a value set derived from a UK sample since there are no published value sets in Indian populations.
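The EQ-5D-3L summary index is obtained by subtracting tariff decrements from full health (1.0) according to the level (1–3) reported in each of the five dimensions. The decrements below are illustrative placeholders, not the published UK value set used in the trial; the sketch shows only the structure of the calculation:

```python
# Illustrative decrements per dimension for levels 2 and 3
# (NOT the UK tariff used in the trial).
DECREMENTS = {
    "mobility":           {2: 0.07, 3: 0.31},
    "self_care":          {2: 0.10, 3: 0.21},
    "usual_activities":   {2: 0.04, 3: 0.09},
    "pain_discomfort":    {2: 0.12, 3: 0.39},
    "anxiety_depression": {2: 0.07, 3: 0.24},
}
ANY_PROBLEM_CONSTANT = 0.08  # subtracted once if any dimension > level 1
# (the published UK tariff also applies an extra constant when any
# dimension is at level 3; omitted here for simplicity)

def eq5d_summary(levels):
    """levels: dict mapping each dimension to its severity level 1-3."""
    assert set(levels) == set(DECREMENTS)
    score = 1.0
    if any(level > 1 for level in levels.values()):
        score -= ANY_PROBLEM_CONSTANT
    for dim, level in levels.items():
        if level > 1:
            score -= DECREMENTS[dim][level]
    return round(score, 3)
```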
During the baseline and at each review, anthropometry, blood pressure (mean of two readings using sphygmomanometer), HbA1c and serum lipid profile (total cholesterol, low density lipoprotein, HDL-cholesterol and triacylglycerols) were measured using standard enzymatic procedures with quality control.
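The trial's power calculation (2-year control-group risk of 25%, 20% relative risk reduction, 80% power, two-sided 5% significance, ~4% withdrawals) can be approximately reproduced with the standard two-proportion sample-size formula. A stdlib sketch; small differences from the published 1134 per group can arise from rounding and the exact withdrawal adjustment used:

```python
from statistics import NormalDist

def n_per_group(p_control, relative_reduction, alpha=0.05, power=0.80):
    """Sample size per group for comparing two proportions
    (normal-approximation formula, two-sided alpha)."""
    p1 = p_control
    p2 = p_control * (1.0 - relative_reduction)
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2.0
    numerator = (z_a * (2.0 * p_bar * (1.0 - p_bar)) ** 0.5
                 + z_b * (p1 * (1.0 - p1) + p2 * (1.0 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

base = n_per_group(0.25, 0.20)   # roughly 1090-1100 before withdrawals
inflated = base / (1.0 - 0.04)   # allow for ~4% withdrawals
```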
Recruitment in India took place between 1 April 2012 and 1 November 2015, and in the UK between 1 June 2013 and 1 November 2017. Follow-up was for 2 years in both countries. The numbers assessed at pre-screening and screening are shown in Fig.
In the UK, recruitment took place in multiple primary care settings, mainly using the NHS Health Checks programme; routinely obtained data were scrutinised for eligibility and individuals were asked to participate if they fulfilled the entry criteria.

Primary and secondary outcomes

In total, 2062 participants were randomised (control: 1031; intervention: 1031). Baseline characteristics were similar in the two randomised groups (Table ). The mean age was 52.0 (SD 10.3) years, and 64.0% of the participants overall were men. During the 2 year follow-up period, 234 (22.7%) individuals in the control group and 216 (21.0%) in the intervention group developed diabetes. The cumulative percentage of individuals who developed diabetes at 6, 12 and 24 months in the control and intervention groups is shown in Fig. There was no significant effect of the intervention on the primary outcome (HR for intervention vs control 0.89; 95% CI 0.74, 1.07; p = 0.22) (Fig. ). The overall intervention effects on the secondary outcomes are shown in Fig. and in Table . Confidence intervals around the estimated effects were wide and overlapped zero. Mean values of most outcomes changed little between baseline and any of the follow-up visits in either randomised group. For all of the secondary outcomes reported in Table , a maximum of 1.3% of individuals had missing values at baseline, except for work PAEE (22.6%), commuting PAEE (22.5%), each of the three ActiGraph physical activity measures (12.6%) and the EQ-5D-3L summary measure (41.1%). The percentages of individuals with missing values were similar in the two randomised groups. When estimating intervention effects using random intercepts linear regression, available data from all time points (including baseline) were included in the model. This assumes that any missing values at either baseline or another time point were missing at random.
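The crude 2-year risks follow directly from the counts reported above (234/1031 in the control group, 216/1031 in the intervention group). The crude risk ratio is close to, but not the same as, the reported HR of 0.89, which comes from a model that adjusts for country and respects the interval-censored event timing. A quick check:

```python
# Counts reported in the trial results
events_control, n_control = 234, 1031
events_intervention, n_intervention = 216, 1031

risk_control = events_control / n_control                  # ~22.7%
risk_intervention = events_intervention / n_intervention   # ~21.0%
crude_risk_ratio = risk_intervention / risk_control        # ~0.92
```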
Within this trial, the correlation between total physical activity (ActiGraph, counts/min) and total PAEE (kJ kg⁻¹ day⁻¹; RPAQ) was 0.28 at baseline and 0.32 at 24 months. Although body weight did not change significantly, there were reductions in estimated intakes of total energy, fat, carbohydrates and protein, and an increase in estimated fibre intake, as assessed by a self-report questionnaire in each group, and as shown in Table . The SMS acceptability questionnaire in India, where the median score out of 6 was 3, showed that messages were generally acceptable. Fewer than 5% of the participants reported that receiving the messages was a disturbance. In the UK, the acceptability of SMS ranged from 85% at 6 months to 82% at 24 months. Over the 2 years of follow-up, the observed percentages developing diabetes in both intervention and control groups were higher in India (control: 30.3%; intervention: 26.0%) than in the UK (control: 12.6%; intervention: 14.3%). There was no clear evidence of differential effects of the intervention by country or sex (tests of multiplicative interaction: randomised group × country: p = 0.33; randomised group × sex: p = 0.12) (Fig. ).

Analysis of results per protocol in the intervention and control arms

In the per-protocol analysis, the overall results were similar to the intention-to-treat analysis for the primary outcome (HR for intervention vs control 0.95; 95% CI 0.79, 1.16; p = 0.63) and for the secondary outcomes.
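Agreement between the two physical-activity measures was summarised as a Pearson correlation (0.28 at baseline, 0.32 at 24 months). A stdlib sketch of the computation on paired values; the example data are invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between paired measurements."""
    n = len(xs)
    assert n == len(ys) and n > 1
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired values: ActiGraph counts/min vs RPAQ PAEE (kJ/kg/day)
actigraph = [210, 340, 150, 420, 280, 190]
rpaq_paee = [38, 55, 46, 60, 35, 42]
r = pearson_r(actigraph, rpaq_paee)
```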
Although mobile technology is being widely applied in the clinical management of a variety of long-term chronic disorders and also in modifying behaviour patterns such as smoking, the number of randomised intermediate and long-term studies in the prevention of diabetes is limited. In this 2 year RCT involving 2062 participants with prediabetes recruited in two countries, India and the UK, with different ethnic and socio-cultural backgrounds, we have shown that delivery of a behavioural intervention by mobile technology is feasible. However, lifestyle modification supported by SMS messages produced only a non-significant reduction in the rate of progression to diabetes over 2 years. A previous pilot trial of this intervention that we completed in India alone did find evidence of an effect. However, there are a number of differences between these trials that may explain the inconsistency in findings. First, the former study was conducted in male employees of major industries, whereas the current study included men and women recruited from the general population. It is unlikely that it is the inclusion of women, rather than men alone, that has led to this inconsistency, since a pre-defined subgroup analysis provided weak evidence that the intervention may yield benefit in women.
In other diabetes prevention studies, a major sex effect has not been observed. It is more likely that the difference we observed is explained by the selected socio-cultural make-up of the population in the original study compared with the more population-based approach in the current study. Second, the former study was conducted at a time when SMS messaging was novel in India, and it is possible that this novelty, and the fact that the messages appeared to come from a healthcare organisation, may have influenced their effect. More recently, SMS messaging in India has proliferated, not only for the purpose of personal messaging, but also for mass advertising. Thus, it is possible that in the more recent study the ‘apparently personal’ targeted messaging that was previously effective is now just part of a slew of such messages, and thus may not only be diluted in effect but could also, by being of a similar nature to advertising, be considered an irritant. Third, the way in which a high-risk prediabetes group was identified was fundamentally different. In the previous study we used the oral glucose tolerance test (OGTT) to define prediabetes and progression to diabetes. This test, and in particular the 2 h glucose, is highly variable and responsive to behaviour change, thus making it an excellent way to identify those at risk and the response to intervention. However, we sought not only to test the effectiveness of the intervention, but to do that in a way that was scalable. Unfortunately, the practicalities of the OGTT make it impossible to scale up to a mass intervention. Thus, we utilised the much more practical HbA1c test, which could theoretically be employed in a real-life intervention programme to classify prediabetes and progression to diabetes.
Although HbA1c has similar biological significance to other measures of glycaemia, for example, in terms of prediction of cardiovascular events, a possible disadvantage may be that, as an integrated measure of glucose control over a period of time, HbA1c is less sensitive to behavioural risk factor change, which may partially explain the lower estimate of effect size in this study. Finally, this study, unlike the former study, was conducted both in the UK and in India. It is possible that there could be country differences in the response to such an intervention, for socio-cultural or other reasons. The UK participants were recruited from primary care centres and thus, by definition, were in contact with an organised system of healthcare and would potentially have greater awareness of the importance of healthy lifestyle behaviours. By contrast, the participants in India are likely to have had less access to primary care and thus potentially a lower pre-existing awareness of health-promoting behaviours and a greater potential to benefit from this form of individual-level targeted prevention strategy. Overall, the observed progression rate from prediabetes to diabetes was greater in India than in the UK. The rate in India is compatible with that in our previous Indian study, and the approximately 50% lower progression rate in the UK is consistent with other recent UK studies. Previous analysis of the Diabetes Prevention Program (DPP) intervention within a single country has shown no differences in the risk of progression to diabetes from prediabetes between ethnic groups. Our study shows that the rate of progression is markedly different between countries. In a pre-specified analysis, we were not able to demonstrate significant differences in the intervention effect between the two countries, but this analysis may have been underpowered to detect differences in a small effect size.
The low incidence of diabetes in the UK arm of this trial may have limited our ability to detect an effect of the intervention. Nor were we able to demonstrate significant differences in the secondary endpoints. Some of the apparent improvements that were observed in self-reported dietary components may be explained by reporting bias. The results of this trial need to be set in the context of results of intervention evaluations elsewhere. In a recent trial in Denver, USA, Fischer et al. evaluated text messaging as an aid to achieving weight loss in individuals with prediabetes. Over 12 months, a clinically significant benefit in terms of weight loss was observed in the intervention group, but at 1 year HbA1c levels did not differ between the groups. A similar impact on weight was observed in a 12 month trial of a low-intensity lifestyle programme in Australian women, but although this intervention included monthly text messages on healthy behaviour, these messages were delivered in addition to phone coaching and provision of a programme manual, making it difficult to isolate the effect of the messages alone. The utility of SMS in improving adherence to antiretroviral therapy and smoking cessation has also been reported. In a recent study in Bangladesh, a community-based intervention with facilitator-led group meetings was effective in preventing type 2 diabetes when an SMS-based intervention alone was not. These studies and our own are compatible with the conclusions from a recent systematic review of electronically delivered weight loss programmes: that electronic delivery of lifestyle advice and motivation alone may be less effective than when supplemented with remote counselling or counselling in person. Cultural differences may also influence outcome and variability in results, although no major differences were observed in our study between effects in India and the UK.
Future studies should be powered to detect small intervention effects, which may not be meaningful at an individual level but might be meaningful when scaled across a population. We would also suggest that studies be established to investigate more thoroughly the contextual factors that may influence the effectiveness of this type of intervention. |
Metabolic Dysfunction-Associated Fatty Liver Disease and Fibrosis Status in Patients with Type 2 Diabetes Treated at Internal Medicine Clinics: Türkiye DAHUDER Awareness of Fatty Liver Disease (TR-DAFLD) Study | ff9c1d10-306e-4503-aedf-3717cd367f9a | 11363181 | Internal Medicine[mh] | Ultrasound (US) examination plus fibrosis-4 (FIB-4) index calculation seems to be a useful method in case-finding for metabolic dysfunction-associated fatty liver disease (MAFLD) and identification of advanced fibrosis risk in internal medicine outpatients with type 2 diabetes (T2D). However, this simple imaging-scoring algorithm, despite enabling the diagnosis of MAFLD in ~70% of patients and the risk for advanced fibrosis in ~25% of those with MAFLD, had been applied only in one-third of patients in our cohort. The possible underdiagnosis of MAFLD in T2D patients treated at internal medicine clinics seems to indicate that a considerable proportion of T2D patients were living with an unknown status regarding the MAFLD and advanced fibrosis risk. Our findings emphasize a need for increased awareness among clinicians on the high prevalence and significant hazards of MAFLD, necessitating its timely diagnosis in T2D patients, and the convenience of US plus FIB-4 index as an easy-to-use strategy in this regard. Type 2 diabetes (T2D) and fatty liver disease share common pathophysiological mechanisms and their co-existence is mutually detrimental, as each condition increases the development and progression of the other. Non-alcoholic fatty liver disease (NAFLD) refers to fatty infiltration of the liver in the absence of significant alcohol consumption and other chronic liver diseases. Besides its strong link to obesity, T2D, and the intestinal microbiome, NAFLD is also regarded as a multisystem disease associated with both liver-related [liver cirrhosis and hepatocellular carcinoma] and extrahepatic [i.e., increased risk of cardiovascular disease and chronic kidney disease] complications.
Recently, based on the crosstalk between NAFLD and metabolic dysfunction, a change of terminology from NAFLD to metabolic dysfunction-associated fatty liver disease (MAFLD) has been proposed by a panel of international experts, which downplays the importance of alcohol in the definition of NAFLD and emphasizes the metabolic risk factors underlying the disease progression. Accordingly, MAFLD is defined by the presence of fatty liver (hepatic steatosis) plus at least 1 of 3 criteria, namely T2D, overweight/obesity, or evidence of metabolic dysfunction. Hence, in contrast to NAFLD, which is a diagnosis of exclusion, MAFLD diagnosis does not require the exclusion of excessive alcohol consumption or other chronic liver diseases. All T2D patients with hepatic fat content >5% identified by radiological imaging modalities, biological scores with reasonable accuracy, or biopsy are considered to have MAFLD. Given the limitations of clinical/laboratory-based risk scores and the invasive nature of liver biopsy, imaging is considered the mainstay tool in MAFLD diagnosis, while hepatic ultrasound (US) has become the guideline-recommended first-line method for the screening and diagnosis of MAFLD due to widespread availability, relatively low cost, and overall safety. Although there is no universally accepted screening approach for patients at high risk for MAFLD, most guidelines recommend case-finding (screening) for MAFLD in all high-risk patients (i.e., diabetes, metabolic syndrome, obesity) and agree that US can be useful in screening for MAFLD (in detecting moderate to high levels of steatosis); they also recommend the use of simple scoring systems [i.e., the fibrosis-4 (FIB-4) index] in those diagnosed with MAFLD to rule out significant or advanced liver fibrosis.
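The rule-in definition above (hepatic steatosis plus at least one of T2D, overweight/obesity, or metabolic dysfunction) maps directly onto a simple predicate; a sketch with our own function and parameter names:

```python
def meets_mafld_criteria(hepatic_steatosis, t2d=False,
                         overweight_or_obese=False,
                         metabolic_dysfunction=False):
    """MAFLD = fatty liver (>5% hepatic fat on imaging, scores with
    reasonable accuracy, or biopsy) plus at least one of the three
    metabolic criteria. Unlike NAFLD, no exclusion of alcohol use or
    other chronic liver disease is required."""
    return hepatic_steatosis and (t2d or overweight_or_obese
                                  or metabolic_dysfunction)
```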
Screening T2D patients for MAFLD is considered a cost-effective strategy, given that T2D patients with concomitant MAFLD represent a highly prevalent and exceptionally high-risk group within the MAFLD population. However, despite the growing epidemic of MAFLD, in parallel with the epidemics of obesity and diabetes, and the high prevalence and serious clinical implications of MAFLD in patients with T2D, there is limited awareness of and familiarity with the disease among clinicians providing diabetes care. This seems to be the major challenge, given that the majority of T2D patients with MAFLD are asymptomatic at early stages, where internal medicine and endocrinology specialists may play a pivotal role in recognition of the disease as they assess these patients at the frontline. In the setting of T2D, the presence of MAFLD simply requires the demonstration of >5% hepatic fat without the need to rule out other chronic liver diseases, which might actually facilitate the diagnosis of the disease by the non-hepatologist. Hence, improved awareness among clinicians of the risk and clinical relevance of MAFLD in the setting of T2D is considered of utmost importance in fighting this global health challenge, by enabling early identification and appropriate and timely intervention in high-risk MAFLD patients, since even the advanced fibrosis stage is considered potentially reversible upon reversal of the initial injurious stimuli. Therefore, within the context of an awareness-raising project conducted in collaboration with the DAHUDER (Society of Internal Medicine Specialists), this cross-sectional TR-DAFLD (TüRkiye DAHUDER Awareness of Fatty Liver Disease) study aimed to provide a snapshot of the current MAFLD and advanced fibrosis status in a cohort of T2D patients treated at internal medicine clinics across Türkiye, via a simple algorithm based on US imaging and the FIB-4 index.
Study Population

A total of 6283 patients with T2D (mean ± SD age: 57.1 ± 11.9 years; 61.1% females) of at least 3 years' duration were included in this retrospective multicenter TR-DAFLD study conducted between February 2023 and April 2023 at 17 internal medicine clinics across Türkiye in collaboration with DAHUDER. T2D patients who presented to internal medicine outpatient clinics for a routine control visit and agreed to participate in the detailed interview performed by the physician during the visit were included in the study on the day of the outpatient control visit. Patients with excessive alcohol consumption or other chronic liver diseases were not excluded, given that MAFLD diagnosis does not require the exclusion of these conditions. However, patients with specific liver diseases such as hepatocellular carcinoma, hepatic cirrhosis, and biliary disease were excluded from the study. Although 6297 patients were initially enrolled, 6283 patients comprised the final study population after the exclusion of 14 patients who did not give consent to the use of their personal data. Written informed consent was obtained from each subject. The study was conducted in accordance with the ethical principles stated in the Declaration of Helsinki and approved by the institutional ethics committee of Antalya Training and Research Hospital (approval number: 1/11; date: January 12, 2023).

Assessments

Details on disease background were obtained via history taking, and the acquired information was combined with US findings and laboratory parameters. Overall, patient demographics (age, gender), duration of diabetes, the latest glycated hemoglobin (HbA1c) value, and the presence of a US examination (including liver parenchyma assessment) performed for any reason within the last 3 years, as well as the US-confirmed MAFLD rates, were recorded for each patient.
In those with US-confirmed MAFLD, the laboratory findings on the day of US and the referral rates (percentage of patients referred to gastroenterology for further investigation) were recorded, while the FIB-4 index was also calculated via the following equation: FIB-4 = [age (years) × aspartate aminotransferase (AST, IU/L)] / [platelet count (10⁹/L) × √alanine aminotransferase (ALT, IU/L)]. Patients with a FIB-4 index ≥1.3 were considered to be at risk of advanced liver fibrosis.

Statistical Analysis

Statistical analysis was performed using the Statistical Package for the Social Sciences® Statistics for Windows, version 25.0 (IBM Corp., Armonk, NY, USA). Descriptive statistics were reported, including mean ± standard deviation, median, interquartile range (IQR), and minimum-maximum values for continuous variables and percentages for categorical variables.
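As a minimal illustration of the FIB-4 calculation described above, the following Python sketch (the function names are illustrative, not from the study) computes the index and applies the ≥1.3 cut-off used here for advanced fibrosis risk:

```python
import math

def fib4_index(age_years: float, ast_iu_l: float, alt_iu_l: float,
               platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelet count x sqrt(ALT))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))

def at_advanced_fibrosis_risk(fib4: float, cutoff: float = 1.3) -> bool:
    """Cut-off of >= 1.3, as applied in this study."""
    return fib4 >= cutoff

# Example using the study's median/mean values (age 57 years, AST 21 IU/L,
# ALT 23 IU/L, platelets 284 x 10^9/L); medians do not compose exactly,
# so the result only approximates the reported median FIB-4 of 0.93.
score = fib4_index(57, 21, 23, 284)
print(round(score, 2), at_advanced_fibrosis_risk(score))  # -> 0.88 False
```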
Baseline Characteristics

Mean age of patients was 57.1 years (range, 18-99 years), and females comprised 61.1% of the study population. Median duration of diabetes was 9 years (range, 5-13 years) and the latest HbA1c values were 7.6% (range, 6.6-9.2%).
Ultrasound Examination and Metabolic Dysfunction-Associated Fatty Liver Disease Rates

Overall, 1731 (27.6%) of 6283 patients were identified to have a US examination, and MAFLD was diagnosed in 1211 (69.9%) of these cases. Also, 831 (48.0%) of the 1731 US examinations were performed specifically for suspected MAFLD, which revealed the MAFLD diagnosis in 625 (75.2%) cases.

Laboratory Findings in Patients with Ultrasound-Confirmed Metabolic Dysfunction-Associated Fatty Liver Disease

Laboratory findings on the day of US in patients with US-confirmed MAFLD (n = 1211) are summarized in . Glycated hemoglobin levels were median 7.7% (IQR: 6.7-9.4%), while mean ± SD platelet counts were 284.0 ± 89.0 × 10³/µL. Median (IQR) AST and ALT levels were 21 (16-29) IU/L and 23 (16-37) IU/L, respectively. Median (IQR) FIB-4 index in patients with US-confirmed MAFLD was 0.93 (0.67-1.29), and advanced fibrosis risk (FIB-4 index ≥1.3) was evident in 290 (24.4%) patients.

Referral Rates in Patients with Ultrasound-Confirmed Metabolic Dysfunction-Associated Fatty Liver Disease

Overall, referral for further investigation upon detection of MAFLD on US was performed in 185 (15.5%) of 1190 patients with available data. Referral rates in patients at risk of advanced fibrosis were 17.9%.
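For transparency, the proportions reported above follow directly from the raw counts stated in the text; a brief Python check (helper name illustrative):

```python
def pct(part: int, whole: int) -> float:
    """Percentage rounded to one decimal place, as reported in the text."""
    return round(100 * part / whole, 1)

# Rates recomputed from counts stated in this study:
print(pct(1731, 6283))  # patients with a US examination -> 27.6
print(pct(625, 831))    # MAFLD when US was done for suspected MAFLD -> 75.2
print(pct(185, 1190))   # referral rate among patients with available data -> 15.5
```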
Our findings in a retrospective cohort of 6283 T2D patients revealed insufficient awareness among internists regarding the screening or case-finding strategy for MAFLD in the setting of T2D. Less than one-third of the T2D patients had a US examination during their follow-up at internal medicine clinics, which confirmed the presence of MAFLD in 69.9% of cases. Advanced fibrosis risk (FIB-4 index ≥1.3) was evident in 24.4% of patients at the time of US-confirmed MAFLD, while referral for further investigation was performed in 15.5% of patients. Türkiye is considered a risky region in terms of NAFLD burden with an estimated 30% prevalence of NAFLD (range, 48.3%-60.1%), which is expected to further increase with the rising prevalence of obesity and T2D. The transabdominal ultrasonography findings from the recent Cappadocia Cohort Study of Türkiye in 2797 subjects (14% with T2D) revealed a high prevalence of hepatic steatosis (60.1%), emphasizing that Türkiye is one of the leading countries in the world for NAFLD. The rates of US-confirmed MAFLD (69.9%) and advanced fibrosis risk (24.4%) in our patients are in line with the consideration that MAFLD affects over half of T2D patients (possibly up to 75%-90%), with histological hepatic fibrosis present alongside steatosis in approximately 1 in 5 individuals with MAFLD.
In a meta-analysis of studies in T2D patients, the global prevalence of MAFLD by US imaging was estimated to be 55.5%, while NASH (i.e., nonalcoholic steatohepatitis) and advanced fibrosis rates on biopsy were 37.3% and 4.8%, respectively. Nonetheless, despite the high prevalence and significant extra-hepatic complications of MAFLD in T2D patients, it is considered to be usually overlooked in clinical practice. Although most guidelines, such as the American Association of Clinical Endocrinology and American Association for the Study of Liver Diseases, the European Association for the Study of the Liver, European Association for the Study of Diabetes and European Association for the Study of Obesity clinical practice guidelines, and the World Gastroenterology Organization global guidelines, recommend a screening or case-finding strategy for MAFLD in at-risk patients including those with T2D, the implementation of these screening strategies in clinical practice is strongly limited by controversies regarding the diagnostic tests and treatment options for MAFLD. More importantly, due to low awareness and poor recognition of MAFLD among clinicians, many T2D patients living with MAFLD are considered to be unaware of their fibrosis stage, and those with advanced fibrosis remain at risk of advanced liver disease due to delayed referral to specialists for evaluation and care. Notably, the MAFLD and advanced fibrosis risk findings achieved in our cohort reflect the current status in only one-third of the overall study population, indicating that most patients with T2D had no US examination during their routine follow-up and thus were living with an unknown status regarding MAFLD and advanced fibrosis risk.
Hence, our findings indicate the possible underdiagnosis of MAFLD in T2D patients treated at internal medicine clinics, emphasizing a need for increased awareness among clinicians regarding the high prevalence of MAFLD and the risk of advanced fibrosis in T2D patients, as well as the suitability of US imaging and the FIB-4 index as a simple screening strategy in these patients. Indeed, as surveillance for liver disease complications is recommended only for patients with severe fibrosis, the application of more specific criteria for risk prediction (i.e., FIB-4 and US-determined indices) for referring patients to a hepatologist is considered a cost-effective fatty liver referral pathway, enabling more reasonable referral rates consistent with the underlying advanced fibrosis. Otherwise, the process may yield very high referral rates (33-85%) when referral is applied also to T2D patients with less severe liver disease, although the physician can continue standard diabetes care, including lifestyle modification, in these patients with no need for further referral. In our cohort, with use of these stringent criteria (US plus FIB-4 index), 24.4% of MAFLD patients were found to be at risk of advanced fibrosis (FIB-4 index ≥1.3) and the overall referral rate was 15.5%. The advanced fibrosis risk and referral rates in our study should be interpreted in light of the possibility that the MAFLD definition includes a larger population of patients at high risk of liver disease progression. The likelihood of underestimating mild disease in the present study should also be considered, given the exclusion of newly diagnosed T2D patients and the low performance of US for the detection of mild steatosis, since US necessitates the presence of steatosis in at least 12.5%-33% of hepatocytes to detect fatty liver with optimal accuracy. In a recent study, based on data from the U.S.
National Health and Nutrition Examination Survey in 6727 T2D patients, MAFLD was identified in 4982 patients, classified as MAFLD/NAFLD(−) in 2032 patients and MAFLD/NAFLD in 2950 patients. The new definition (MAFLD) was reported to increase the fatty liver diagnosis in T2D patients by 68.9%, while patients classified as MAFLD/NAFLD(−) were also found to be at a higher risk of major adverse cardiovascular events, advanced fibrosis, and all-cause and cardiovascular-related mortality compared to those classified as MAFLD/NAFLD. Accordingly, MAFLD not only identifies more patients, due to the non-exclusion of other chronic liver diseases, but also seems to be better at identifying patients at risk of liver and cardiovascular complications, which is considered to indicate a need for better risk stratification to prevent an over-inclusion of fatty liver. Although there are no pharmacological agents approved specifically for treating MAFLD, lifestyle modification, particularly weight reduction via dietary and exercise strategies or bariatric surgery, in addition to statins and some antidiabetic medications [i.e., pioglitazone, glucagon-like peptide 1 receptor agonists, and sodium-glucose cotransporter-2 (SGLT2) inhibitors] with proven benefits in overall improvements in liver histology and hepatic fibrosis, are recommended in T2D patients with MAFLD. Thus, MAFLD is suggested to be considered an emerging diabetic complication and to be timely diagnosed and systematically evaluated through the proactive participation of all health care providers taking care of T2D patients, as with other conventional diabetes-related complications.
Besides the low awareness among clinicians of MAFLD, many factors have been implicated in the underdiagnosis of MAFLD in clinical practice, such as knowledge gaps regarding the risk factors, diagnosis, and management approaches, the lack of tools to support clinical decision making, and the dearth of national strategies, guidelines, or action plans to address the increasing prevalence of MAFLD. Therefore, improved awareness (via continuing education programs, awareness campaigns, improved guidelines, and referral protocols) among all important stakeholders (primary care physicians, specialists, and health policy makers) is emphasized regarding the addition of MAFLD as another frequent end-organ complication of T2D necessitating timely diagnosis and intervention. Given that international guidelines increasingly advocate multidisciplinary approaches for patients with MAFLD, strategies to fight the underestimation of the disease burden and lack of awareness should also consider the potential interdisciplinary differences in awareness, knowledge, and management of MAFLD and thus specifically target the medical specialties where the largest improvements could be made. The major strength of this study seems to be the potential generalizability of our results, given the inclusion of 6283 T2D patients from 17 internal medicine clinics across Türkiye. However, certain limitations should be considered. First, due to the cross-sectional design, it is impossible to establish any cause-and-effect relationships. Second, since this is an awareness study regarding the US examination and MAFLD diagnosis rates in T2D patients, analysis of patient and treatment characteristics (i.e., family history, concomitant obesity, viral hepatitis, treatment changes in those with MAFLD/advanced fibrosis) was not within the scope of the study. Third, the unknown MAFLD status of most patients due to the absence of US imaging is another potential limitation.
Fourth, the exclusion of newly diagnosed T2D patients and the use of US as the sole imaging modality might have resulted in an underestimated diagnosis of mild disease. Nevertheless, this study was conducted in the context of an awareness-raising project to provide a snapshot of the current MAFLD status among T2D patients treated at internal medicine clinics across Türkiye. In conclusion, our findings revealed the favorable utility of US plus the FIB-4 index in case-finding for MAFLD and identification of advanced fibrosis risk with reasonable referral rates in T2D patients treated at internal medicine clinics. However, this simple imaging-scoring algorithm, despite enabling the diagnosis of MAFLD in ~70% of patients and identification of the risk for advanced fibrosis in ~25% of those with MAFLD, had been applied in only one-third of patients, and with an indication of suspected MAFLD in only half of them, indicating that most patients with T2D were living with an unknown status regarding MAFLD and advanced fibrosis risk. Hence, the possible underdiagnosis of MAFLD in T2D patients treated at internal medicine clinics emphasizes a need for increased awareness among clinicians of the high prevalence and significant hazards of MAFLD, necessitating its timely diagnosis in T2D patients, and of the convenience of US plus the FIB-4 index as an easy-to-use strategy in this regard.
External post-mortem examination in virtual reality—scalability of a monocentric application

In several countries, including Germany, physicians are required to perform external post-mortem examinations regardless of their specialization. The quality of these medical examinations is often intensively and sometimes controversially discussed, particularly due to structural conditions and occasionally glaring faults. One potential solution to this issue could be to intensify the training of medical students with a stronger practical focus. However, a review of the current situation in forensic medical teaching reveals that, despite several published reports on new teaching, learning, and examination methods regarding medical external post-mortem examinations, conventional formats still prevail in Germany. Therefore, the challenge remains to foster the establishment of practice-oriented teaching and learning methods, develop corresponding exam formats, and distribute these concepts to further sites. At the Institute of Forensic Medicine, part of the Medical Faculty of Martin Luther University Halle-Wittenberg (Germany), the topic of external post-mortem examination comprises a lecture (90 min) and practical exercises (180 min). Until 2014, students in their 8th semester acquired practical skills exclusively on deceased individuals during a seminar in a mortuary. To improve the medium- and long-term quality of the examinations as well as to expand the practical training, a cooperation with the Dorothea Erxleben Learning Center Halle was initiated. Initially, a skills lab station and an e-learning module were established, where students could practice filling out death certificates in 10 different external post-mortem examination scenarios.
Corresponding to this training and the practical exercises with real corpses, two forensic medicine OSCE (Objective Structured Clinical Examination) stations were developed. These included the practical training of an external post-mortem examination on a simulation mannequin and the completion of a death certificate in the presence of an examiner. Shortly afterwards, the second station was transferred to a computer-based format. Another purpose of the simulation mannequin modified for external post-mortem examination was to test the corresponding practical skills for the medical final examination during the COVID-19 pandemic. In a first collaboration between the forensic medicine institute and the Medical Interprofessional Training Center (MITZ) of the Technical University of Dresden (TUD), the simulation mannequins were also utilized for training medical students at MITZ in external post-mortem examination. The scenario was supplemented by actors simulating the relatives of the deceased. Furthermore, in Dresden, this method was also applied to the training and assessment of external post-mortem examination for police officers, opening up a new field of application. However, a critical review of the applied methods revealed that the comprehensive setting at the scene of the body's discovery could not always be adequately considered or simulated. One way to address this problem is the reconstruction of real cases using model rooms. As these environments require additional preparation and personnel expenditure, cost-effective alternatives had to be found. It was noted that clinical disciplines in particular have been introducing new digital teaching and learning environments for several years, especially using virtual reality (VR) technology. Moreover, virtual environments are said to have resource-saving and thus cost-efficient characteristics and can simulate real scenarios in a safe environment.
With regard to this development, a virtual external post-mortem examination project was started in 2018, based on a close collaboration between the Institute of Forensic Medicine Halle and the Dorothea Erxleben Learning Center Halle. The virtual reality application was created in the Unity Engine (Unity Technologies, San Francisco, CA, USA) and deployed on Oculus Quest VR headsets (Menlo Park, CA, USA), with the main objective of providing a more realistic and detailed discovery site of a corpse. Thereafter, the virtual body examination scenario was subjected to an initial evaluation with a limited number of teachers and medical students in their practical year. Subsequently, a further case scenario was developed, along with additional modifications to the process and technical implementation. Students were trained with two virtual cases. As the simulation focuses on typical circumstances of discovery, the training is set in a domestic environment. Each scenario comprises a deceased person with post-mortem signs indicating a specific cause of death. Players are expected to perform a body examination using an integrated toolbar, which allows for the complete undressing of the corpse using scissors, measuring the core body temperature with a thermometer, examining the eyes and mouth with a pair of tweezers, and a closer inspection of livor mortis or other external abnormalities using a magnifying glass function. In addition, players have the opportunity to investigate the virtual apartment and the belongings of the inhabitant. Further hints, such as an identity card for identification purposes, provide the necessary information for processing the case. Afterwards, students complete a death certificate in the virtual environment. The seminar concludes with a final discussion and feedback from a tutor. Further evaluations were then carried out with students in their practical year and doctors in advanced training.
In light of the need for efficient and widespread use of resources and the exploitation of synergistic potentials, interprofessional and interdisciplinary teaching concepts should be transferred across locations. Therefore, the question arose as to whether the successful establishment at the Halle site could be repeated at other locations. Consequently, in 2022, the VR external post-mortem examination was modified in Halle, and the didactic concept was altered and implemented at the MITZ Dresden as part of the virTUos project at TUD, funded by the "Stiftung Innovation in der Hochschullehre". In order to pursue a successful transfer of the application to the curricula of MITZ, which is part of the Medical Faculty Dresden, a comprehensive modification for integration into the existing training concept of forensic medicine and the local conditions of MITZ was necessary. At the Dresden location, thanatology has so far been covered in a lecture (90 min) and a practical course on external post-mortem examination (90 min) in seminar group size (about 20 students) in the 5th semester, plus the opportunity for training in medical corpse examination using a simulation mannequin (45 min) in the 8th semester. Furthermore, the optional elective "Medicine and Law" is provided during the 2nd semester, which also includes a practical exercise on external post-mortem examination. During the winter semester of 2022/23, the virtual external post-mortem examination was offered for the first time as an elective course in a peer-teaching format for students in their 5th semester. In this respect, it was advantageous that a system update to SteamVR in 2022 made the serious game available across device manufacturers and platforms. SteamVR is a free plugin provided by Valve Corporation via the video game distribution service Steam and offers a plug-and-play solution for SteamVR-compatible devices, including the HTC Vive, Meta Quest 3, and Pico Neo 4.
As part of the alterations for the new Dresden site, the user interface was also fundamentally revised. While the available functions remained the same as those in Halle, users were now guided step by step through the process of filling out the death certificate. Information collected in the game appeared in the smart menu and could be linked with the corresponding sections of the death certificate. In contrast to the first evaluation phase in Halle, in which the course was compulsory and only took place on site, the optional course in Dresden was offered using the "flipped classroom" method and consisted of two phases. As preparation for the event, an e-learning module was tailored, comprising a summary of the required theoretical learning content previously imparted during lectures and practical training, as well as a checklist for filling out a death certificate and practical repetition exercises. During the on-site training, each group comprised four students. At the beginning of the one-hour event, trained tutors introduced the topic and explained the technical conditions. While the students took turns performing the virtual external post-mortem examination, their progress was documented by the other participants using a checklist. This was followed by a work phase and a reflection phase. The experiences and insights gained in the virtual environment were recapitulated and then transferred to the death certificates. The pilot stage was evaluated using a uniform, standardized evaluation form. Based on 19 items previously used at the Halle site, the evaluation was expanded to 31 items and categorized into six areas: prior knowledge (4 items), organization and structure (6 items), learning content (5 items), guidance by the tutors (4 items), technical implementation of virtual reality (6 items), and overall assessment (6 items). Twenty-two items were rated using a Likert scale.
Level 1 indicated full agreement, while the highest level, 5, symbolized fundamental disagreement. On the 5-point scale, levels 1 and 2 were summarized as 'full and predominant agreement', as previously done in Halle. For two items, specific details (semester of study, previously completed courses) were requested, while for the other six items, respondents usually had three response categories available: 'just right', 'too little', and 'too much'. Finally, a free text field allowed participants to state particularly critical or positive aspects. A total of 73 students participated in the pilot stage, 63 of whom completed an evaluation form (response rate of 86%). However, not all of the participants rated each item, which is indicated in the following by the varying total number of evaluations. The participants (where specified) predominantly belonged to the 7th semester (n = 20), followed by the 8th semester (n = 17) and 9th semester (n = 15), with a few from the 6th (n = 3) and 5th semester (n = 4). At the time of the event, most students already had prior knowledge of thanatology: 58 participants (85.7%) had attended the lecture, and 43 students (68.3%) had completed the practical course in corpse examination. Despite this background, only 25.4% of the participants (n = 16) felt completely or mostly confident in conducting a practical corpse examination before the learning event (Fig. ). As for completing a death certificate, this percentage was 34.9% (n = 22). The e-learning module provided for preparing for the virtual external post-mortem examination was assessed as completely or mostly positive in both scope (95.1%, n = 58 of 61) and content (93.1%, n = 54 of 58) by the majority of participants. Only four participants reported technical issues during the module. A large majority (87.3%, n = 55) of the students found the information about the procedure of the actual teaching event to be completely or mostly sufficient.
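The dichotomization of the 5-point Likert responses described above (levels 1 and 2 pooled as 'full and predominant agreement', missing answers excluded from the denominator) can be sketched in Python; the function name and example data are illustrative, not from the study:

```python
def agreement_rate(responses):
    """Share of responses at Likert level 1 or 2 ('full and predominant
    agreement'), with missing answers (None) excluded from the denominator."""
    valid = [r for r in responses if r is not None]
    agreeing = sum(1 for r in valid if r <= 2)
    return round(100 * agreeing / len(valid), 1), len(valid)

# Illustrative distribution reproducing one reported figure:
# 16 of 63 respondents answered 1 or 2 -> 25.4% agreement.
example = [1] * 10 + [2] * 6 + [3] * 20 + [4] * 15 + [5] * 12
print(agreement_rate(example))  # -> (25.4, 63)
```

Returning the number of valid answers alongside the rate mirrors the "n = x of y" reporting style used in the evaluation.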
All participants deemed the simulation of the external post-mortem examination in small groups of four students each to be optimal. The one-hour duration of the teaching event was considered too short by just over a quarter of the participants (28.6%, n = 18), while a significant majority (69.8%, n = 44) found the timing to be appropriate. The timeframe was regarded as rather too long by only one participant. Both the technical (93.4%, n = 57 of 61) and subject-specific instruction (96.8%, n = 61) provided by the tutors were rated as just right. Almost all students (98.4%, n = 62) felt encouraged to think independently and were appropriately challenged by the tutors according to their knowledge level. Approximately four-fifths of the students (79%, n = 49 of 62) had no or only minimal prior experience with VR technology. As a result, only half of the participants (53.3%, n = 32 of 60) reported that they found the practical operation of the VR headset to be easy or very easy. Among the participants with previous experience in VR, just under half reported having suffered from nausea and dizziness ("motion sickness") at least occasionally during use (48.3%, n = 14 of 29). A slight majority of the students (53.3%, n = 32 of 60) considered the technical implementation of the scenario to be realistic. Due to the course, 93.1% (n = 58 of 62) felt more confident in conducting a practical post-mortem examination, an increase from the initial rate of 25.4% (Fig. ). When it came to filling out the death certificate, there was a clear improvement, from 34.9% to 96.8% (n = 61 of 63), in feeling completely or mostly competent (Fig. ). Nearly all students (98.4%, n = 62 of 63) completely or mostly agreed that aspects of forensic medicine should be repeated throughout their studies (Fig. ).
Similarly, 96.8% (n = 61 of 63) believed that a simulated external post-mortem examination using VR technology was completely or mostly suitable for reinforcing their forensic skills and competencies. Additionally, the virtual external post-mortem examination was seen as a useful complement to existing teaching events by 91.7% (n = 55 of 60) of the students (Fig. ). The technical implementation of the teaching event was judged as mostly or completely successful by 91.8% (n = 56 of 61), and the content approval rate was even higher at 96.8% (n = 61 of 63). The teaching event received the top grade of "1" from 68.3% (n = 43 of 63) of the participants, while 28.6% (n = 18 of 63) rated it as "2" and 3.2% (n = 2 of 63) as "3". Finally, 96.8% (n = 61 of 63) of the students indicated they would very likely or likely recommend the course to others. Despite the high approval rates for the newly introduced learning concept, 91.7% (n = 55 of 60) of the participants expressed that training on a real corpse is still indispensable (Fig. ). The issue of errors in external post-mortem examinations, extending to missed homicides, is exceedingly complex. The causes of these errors can be classified under several aspects: inherent limitations of the external examination itself; structural factors such as legal regulations; situational factors like family expectations of a natural-death determination or challenging conditions such as decomposition; police biases favoring natural-death cases; and medical reasons. These medical errors encompass a broad range, from incomplete examinations without inspection of body orifices to overlooking subtle or evident signs of external violence, careless identification of the body, misjudgments regarding the time of death, cause of death, and manner of death, and the incorrect formation and classification of causal chains.
From the perspective of forensic medicine, an important contribution to reducing this issue can be made by providing more intensive and practice-oriented training for medical students, as well as continuing education for doctors obliged to conduct external post-mortem examinations. In recent years, several concepts for corresponding teaching, learning, and examination methods have been introduced, but they have not yet become widespread. Schmeling et al. described a web-based e-learning program in the form of a "Click and Point Adventure" for training in external post-mortem examination, which also takes into account the environment of the corpse. Ebert et al. replicated a crime scene by reconstructing it in virtual reality, while Koller et al. used the VR environment for the specific tasks of external forensic examination and measurement of injuries. Another analog variation is the already described training with simulation mannequins and the additional participation of actors as simulation patients , which also represents a comprehensive setting for corpse inspection. In all mentioned scenarios, practical training for external post-mortem examination is emphasized, while the teaching of how to fill out a death certificate is omitted. Current practical exams also reflect this strategy: the actual performance of the external post-mortem examination and the filling out of the death certificate are separated. Especially during exams, such as OSCE exams (Objective Structured Clinical Examination), the tight time schedules are apparent, which leads to the corpse environment being considered only marginally . This assessment is likely to apply equally to the integration of this topic into OSPE formats (Objective Structured Practical Examination) . 
In order to largely cover the spectrum of potential medical sources of error, both the practical external post-mortem examination with consideration of the environment and the completion of the death certificate in a case scenario are intertwined in the virtual external post-mortem examination . Throughout the first evaluation phase at the Halle site, the differences and potential advantages over previous projects were demonstrated. In the current study, alongside an updated inventory of similar projects and a comparison to these applications, the main focus was to examine to what extent such a learning concept can also be successfully disseminated across different locations. Likewise, the limitations and boundaries of the method must also be critically discussed. Before the virtual external post-mortem examination, almost all students had attended the lecture on thanatology, and over two-thirds had completed the external post-mortem examination practicum. As a result of these courses, students generally assessed their own skills and competencies in conducting practical external post-mortem examinations and filling out death certificates rather conservatively, as had been observed previously at the Halle site . This assessment supports the need to establish additional teaching and learning methods for the medical external post-mortem examination that go beyond the current, still widely used concept of lectures and practicums . Unlike the approach at Halle, an e-learning module was introduced in Dresden as a preparatory tool for the virtual external post-mortem examination and included additional items in the evaluation. Both the self-study unit for preparation and the materials provided were met with a high proportion of favorable evaluations. These results suggest that the modification of the onboarding phase implemented in Dresden should also be adopted in Halle and possibly at other locations. 
At this point, the modular usability of the virtual reality application also becomes apparent. Acting as a practical component within forensic medicine educational offerings, it was able to supplement dissemination of knowledge by a tutor (Halle) as well as an e-learning programme (Dresden). Additional fields of application are conceivable and should be investigated. The duration of one hour, deemed appropriate by nearly two-thirds of the students, and the group size of four participants, which was rated as optimal by all attendees, suggest that the time frame of at least one hour and the small group format should be maintained in the future. This is especially pertinent since teaching units conducted in small groups are associated with higher learning success compared to larger group formats . Simultaneously, considering the preferences of students who favor a longer format and the evaluation results of the pilot study , it would be advisable to extend the course offering to two hours, provided the time frame allows it. Both the technical and the subject-specific support provided by the tutors were rated as exceptionally helpful, yet there was still ample space left for encouraging independent thinking. This balance was also positively highlighted by the students in their open-ended feedback, referring to it as a "pleasant, educational atmosphere." During the course, the support from the tutors, including the possibility of intervention, seems justified given that for most participants, handling virtual reality was largely unfamiliar. In fact, three-quarters of the students stated no or little experience with VR applications in Dresden, which was a consistently high rate and similar to the initial assessment in Halle, four years earlier . Correspondingly, the assessment of the practical operation of the VR format was balanced between easy and difficult handling. According to Speidel et al. 
, student reservations are due to a lack of experience with VR technology, and its significance for current and future teaching is viewed with some skepticism. Although it is expected that such concerns will decrease as digitalization in the work and leisure environment continues to grow, especially accelerated by the COVID-19 pandemic, this challenge must still be adequately considered when further establishing VR technology in medical education, along with the well-known issue of "motion sickness." In Dresden, this aspect was included as an additional item in the evaluation, with over 40% of participants indicating that they had faced this phenomenon at least occasionally during previous VR usage. As motion sickness is a known issue with VR applications in other fields, a workaround was implemented through task distribution in small groups. This allowed students to follow activities by "looking over the shoulder," eliminating the need to wear VR goggles for the entire duration of the course. In the 2020 study in Halle, the technical implementation and realism of the simulated external post-mortem examination using VR technology were rated as fully or predominantly positive by almost two-thirds of the participants. In the meantime, the application has been further developed and optimized. However, the realism was rated positively by only 53.3% of the participants in Dresden, whereas the technical implementation of the virtual external post-mortem examination was rated as fully or predominantly positive by 91.8%. Since primarily modifications have been made to the interaction concept and the spatial design has remained largely unchanged, an increased demand for aesthetics in virtual environments can be inferred, which may have arisen due to advancements and expectations from the gaming industry. 
To continuously ensure a realistic representation, it is necessary to optimize the environment according to current standards now that this increased expectation has been detected. Regarding the content and learning objectives of the VR training, the evaluation results showed that, at least according to the participants' assessments, forensic skills and abilities could be significantly enhanced. Over 90% of the participants felt much more confident in conducting a practical external post-mortem examination as well as filling out a death certificate after the course. These approval rates were significantly higher than in the previous study in Halle , where the acceptance rate for the practical external post-mortem examination was at 52.5% and for filling out the death certificate was at 62.5%. Whether these differences can be solely attributed to the modified concept in Dresden with an online preparation unit and the technical optimization of the VR application is questionable and should be examined in more detail. In the same way, the extent of the actual long-term learning success should be assessed in a corresponding examination format. Furthermore, the course should also be evaluated in a larger cohort than the limited number of participants in the present pilot event. In the overall assessment, nearly all participants (98.4%) deemed the repetition of forensic medical content as meaningful, and the virtual external post-mortem examination was predominantly (87.1%) considered a suitable didactic tool for this purpose. These approval rates were even slightly higher than those in the Halle study , which showed 85.0% for meaningful repetition of content and 64.1% for the suitability of the virtual external post-mortem examination as a didactic instrument. Moreover, the highly positive reception of the newly introduced learning concept among students in Dresden was reflected in the overall grades and the rate of recommendation. 
Other studies have also attributed high didactic potential to VR technology due to its more sustainable learning effects compared to other digital and traditional teaching methods . In the applications in Dresden and Halle, it was determined that the simulated external post-mortem examination using VR technology offers advantages over the real post-mortem examination. These include higher standardization and reproducibility of findings, as well as the effectiveness and independence from spatial, temporal and situational resources . Based on these advantages, the use of VR technology as a testing method is recommended, especially since simulation mannequins have already been successfully used in the practical examination of external post-mortem examination . As noted by Speidel et al. , the reliable establishment of teaching formats using VR technology is a prerequisite for this. However, in terms of sensory feedback, the simulated external post-mortem examination using VR technology has multiple inherent weaknesses, which comprise the absence of tactile and olfactory sensory impressions and the consequent lack of need to overcome emotional barriers , which are important causes of errors in real external post-mortem examinations . Based on these experiences from the 2020 evaluation, the issue was addressed in Dresden as well, confirming that for over 90% of the students, training on a real corpse remains indispensable despite the learning effect of the virtual external post-mortem examination. Therefore, to achieve medium- and long-term improvement in the quality of medical post-mortem examinations, pursuing intensified practical training is recommended, involving a combination of training on both real and virtually simulated corpses. Overall, it can be concluded that the simulated external post-mortem examination using VR technology offers numerous advantages and high didactic potential, along with few disadvantages. 
Undoubtedly, significant technical, material, and personnel efforts were required for the establishment and further development , as well as for functioning cooperation structures between the involved institutions and a high commitment of the participating individuals. At the same time, the present study has shown that the simulated external post-mortem examination using VR technology can be successfully transferred to other locations and adapted to local conditions. For the efficient application and further development of this learning method, it would be desirable for more sites to participate. After agreeing on appropriate transfer conditions , a larger contingent of various cases could be established for exchange between locations. Furthermore, a type of modular system of suitable learning concepts could be developed, which would address specific curricular requirements. The content of the modules, as well as the combination and sequence of the learning content and the impact on learning outcomes depending on prior knowledge, must be scrutinized to derive recommendations for action. In order to record comparable user experiences, these measures can be accompanied by the automated documentation of user decisions, as well as conducting comparable user tests or establishing suitable test formats. Besides this cross-location dissemination and documentation of increased knowledge, the use of the simulated external post-mortem examination is also conceivable beyond its application in medical studies. Especially in countries with a medical obligation to perform external post-mortem examinations regardless of specialization, this learning method can be applied in the continuing education of physicians. Subsequently, the virtual external post-mortem examination could be implemented in training measures for other professional groups, such as the police, or in interprofessional fields such as criminology. 
Reply: efficacy and safety of CO | 11d5a415-07bd-4562-a16c-330d112798b2 | 11826619 | Surgical Procedures, Operative[mh] | |
Long Interspersed Nuclear Element-1 Analytes in Extracellular Vesicles as Tools for Molecular Diagnostics of Non-Small Cell Lung Cancer | c7c1b19c-adfe-4a12-9c95-d23402a9f37c | 10816871 | Pathology[mh] | A diploid human genome contains ~100 full-length copies of retrotransposition competent LINE-1 retroelements ( A) . These retroelements are silenced epigenetically in nearly all healthy somatic cells by the interplay between DNA methylation and histone covalent modifications, a process orchestrated in part by retinoblastoma proteins and the NuRD corepressor complex . Disruption of epigenetic silencing can unleash LINE-1 retrotransposition, which entails propagation of LINE-1 DNA, or other DNAs, through a copy-and-paste mechanism using an RNA intermediate. LINE-encoded proteins (ORF1p and ORF2p) exhibit cis - or trans -preference and bind mRNAs to form ribonucleoprotein particles (RNPs). These particles mainly localize to the cytoplasm or are stored in stress granules , but can also translocate to the nucleus where the endonuclease domain of ORF2p nicks a single strand of genomic DNA to expose a 3′-OH group that is used to prime and synthesize LINE-1 cDNA. This can lead to full-length or truncated insertions of LINE-1 or other sequences that modulate genome architecture and function. The reactivation of LINE-1 retroelements can occur in several different contexts, particularly following DNA damage or disruption of epigenetic control by inflammation and oxidative injury . Malignant transformation of somatic cells can also give rise to epigenetic disturbances that erode LINE-1 silencing and allow uncontrolled expression and accumulation of LINE-1 products . Active LINEs are a source of endogenous mutagenesis, with reactivation in somatic cells causing a variety of genetic alterations, including aberrant splicing, exon skipping, gene fusions, and genome rearrangements that alter gene expression and cause genome instability . 
Furthermore, LINE-1 reactivation creates a positive feedback loop that perpetuates the aberrant behavior of cells carrying mutations in tumor suppressor genes and/or oncogenes . LINE-1 oncogenicity can also involve signaling pathways that are independent of retrotransposition . Given the multifaceted roles of LINE-1 in cancer, we and others have postulated that readouts of LINE-1 activity may serve as indicators of oncogenic transformation. This hypothesis is supported by the strong correlation between LINE-1 expression and malignancy , tumor genomic instability , and cancer mortality . LINE-1 DNA hypomethylation (i.e., activation) is also a common feature across many different cancer types , and correlates with poor clinical outcomes and lung cancer mortality . The clinical utility of LINE-1 methylation status is limited because it depends on the direct testing of tissue biopsies using low-throughput technologies. Molecular biomarkers are sorely needed for lung cancer detection, especially because most cases are detected when curative interventions are no longer a viable option. Further, current screening with low-dose computerized tomography (LD-CT) is fraught with high false-positive rates leaving patients and providers with challenging follow-up decisions . Non-small cell lung cancers (NSCLCs) are strongly impacted by LINE-1 dysregulation. LINE-1 reactivation is prevalent during early stage NSCLCs, especially in smokers , and the genome of NSCLC is strongly affected by LINE-1 insertions . Thus, to harness the diagnostic potential of LINE-1 we have focused efforts on measurements of total plasma LINE-1 or LINE-1 analytes loaded onto extracellular vesicles (EVs) as lung cancer biomarkers. EVs are secreted membrane-bound vesicles containing curated DNA, RNA, and proteins from their cells of origin . EVs are released by nearly all cell types and can be collected from saliva, blood, urine, and cerebrospinal fluid . 
While the presence of LINE-1 products in EVs has been documented by us and others , detailed analyses to determine if LINE-1 products in EVs can be used for cancer detection and future development of point-of-care diagnostics have not been systematically completed. With these goals in mind, we used a panel of transformed and non-transformed lung epithelial cell lines and human plasma to define the relationship between cellular and EV LINE-1 contents and to evaluate LINE-1 profiles in healthy and diseased states. Here, we report that the LINE-1 cargo in EVs isolated from conditioned media paralleled LINE-1 levels in the cells of origin, under both constitutive and carcinogen-inducible conditions. Among ostensibly healthy subjects, plasma EV LINE-1 content was higher in females than males and in African Americans compared to Hispanic Americans. Among subjects with NSCLC, considerable heterogeneity in plasma EV LINE-1 levels was observed when stratified by cancer stage, race, sex, and tumor type. Further, ORF1p levels in whole plasma using an ELISA platform closely approximated ORF1p and LINE-1 mRNA levels in EVs, indicating that the majority of circulating LINE-1 is contained in EVs. We conclude that measurements of LINE-1 analytes in EVs and plasma may be of value as liquid biopsies to monitor tissue level expression and activity of LINE-1 retroelements. To characterize EVs released by H520 lung cancer cells, EVs were isolated by PEG precipitation of conditioned media for 48 h. H520 cells have high constitutive expression of LINE-1 . EV isolates ranged in diameter from 50 to 225 nm, with most particles concentrating at ~80 or 110 nm ( B), a range consistent with EVs of endocytic origin, mainly exosomes. The preparations were free of contamination with particles of a high diameter. EV protein profiles were examined by Western blotting, using unconditioned medium (UCM) and EV-free medium (EFM) as negative controls and total cell lysate as a reference control ( C). 
Traces of contaminating protein in EFM were removed by centrifugation coupled with several PEG resuspensions in fresh PBS as evidenced by the low abundance of ORF1p in EFM compared to the enriched EV sample. The exosome markers ALIX, Flotillin-1, and CD9 were enriched in EV fractions and absent in UCM and EFM. The absence of calnexin, a protein enriched in the endoplasmic reticulum, further validated the high quality of the EV preparations. Next, we examined the LINE-1 ORF1p content in EVs. ORF1p is a 40 kDa protein that preferentially forms multimers resistant to denaturation under reducing SDS-PAGE conditions ( A) . The most abundant ORF1p species detected in H520 cell lysates were the monomeric and trimeric forms, while the dimeric form was predominant in EVs. These results are consistent with previous findings showing the ORF1p dimer in EV buoyant density gradients . N-ethyl maleimide treatment did not enhance ORF1p detection under reducing conditions. Protein loading was verified by Ponceau staining ( D). Western blotting of EV isolates from H460 cells, another NSCLC cell line, challenged with the LINE-1 inducer benzo[a]pyrene (BaP), also yielded an ORF1p dimer in EVs ( E), confirming ORF1p EV export in two different NSCLC cell lines. To evaluate the presence of LINE-1 mRNA in EVs, intact H520 EVs were treated with RNAseA to remove contaminating RNA prior to lysis and isolation ( F). β-Actin mRNA was used as a positive control. RNA was also collected from UCM and EFM to monitor background levels. LINE-1 contains no introns; thus, LINE-1 primers cannot distinguish genomic DNA from cDNA. Thus, each LINE-1 sample was normalized to a matched control lacking reverse transcriptase (RTC) to account for residual gDNA signal. As the exon-junction spanning β-Actin primers do not amplify gDNA, the signal was normalized to a non-template control (NTC). 
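The background normalization just described can be sketched numerically. Under the usual assumption of ~100% amplification efficiency, fold enrichment over a background control is 2^(Ct_background − Ct_sample); the Ct values below are illustrative placeholders, not the study's measurements.

```python
# Sketch of qPCR background normalization as described in the text:
# intronless LINE-1 signal is expressed as fold change over a matched
# no-reverse-transcriptase control (RTC) to discount residual gDNA,
# while exon-junction-spanning beta-Actin primers (which cannot amplify
# gDNA) are normalized to a non-template control (NTC).

def fold_over_background(ct_sample: float, ct_background: float) -> float:
    """Fold enrichment of a qPCR signal over its background control,
    assuming ~100% amplification efficiency (2^deltaCt)."""
    return 2.0 ** (ct_background - ct_sample)

# Illustrative Ct values (hypothetical, not from the study)
line1_ev, line1_rtc = 24.0, 33.0   # LINE-1 in EV RNA vs matched RTC
actin_ev, actin_ntc = 22.5, 34.0   # beta-Actin in EV RNA vs NTC

print(f"LINE-1 fold over RTC:  {fold_over_background(line1_ev, line1_rtc):.1f}")
print(f"b-Actin fold over NTC: {fold_over_background(actin_ev, actin_ntc):.1f}")
```

With these placeholder Ct values the fold changes land on the same order of magnitude as the enrichments reported below (hundreds-fold for LINE-1, thousands-fold for β-Actin).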
The presence of both β-Actin and LINE-1 mRNA was confirmed in EVs, with significant elevations of LINE-1 (526 ± 67.2 FC) detected over the background and β-Actin (2887.6 ± 182 FC) ( p < 0.05). LINE-1 and β-Actin mRNAs were not enriched in the EFM and UCM.

2.1. Profiles of LINE-1 Abundance in NSCLC Cells and Their Corresponding EVs under Constitutive Conditions

We next determined whether EV LINE-1 content serves as a proxy of cellular LINE-1 levels . A panel of cells that included the non-transformed bronchial epithelial line, BEAS-2B, along with several NSCLC epithelial cell lines was examined ( A). Cells were allowed to condition EV-depleted media for 48 h before the collection of cell and media fractions. Western blotting showed constitutive ORF1p expression across all lines ( B). A549 cells exhibited the lowest levels of ORF1p, followed in ascending order by BEAS-2B, H441, H1299, H460, H827, and H520 cell lines. As A549 cells consistently exhibited the lowest LINE-1 levels, all subsequent LINE-1 measurements were expressed as a fold change relative to A549 cells or A549 EVs. The next series of studies relied on PEG precipitation for EV isolation in order to facilitate comparisons between cells and plasma. PEG precipitation is incompatible with SDS-PAGE , and therefore ELISA was used to measure ORF1p levels ( C, top row). ORF1p measures in cell lysates using the ELISA platform displayed rank order profiles comparable to those seen by Western blotting ( B). The BEAS-2B, H441, H1299, and H827 cell lines showed moderate ORF1p levels in EVs ranging from 2.6 to 3.2 FC relative to A549 EVs. The H460 and H520 cell lines had the highest ORF1p content with 6.7 and 7.8 FC, respectively. The linear regression showed a significant relationship between cellular and EV ORF1p content ( D, top, p = 0.03, R² = 0.64), indicating that ORF1p content between cells and their corresponding EVs is proportional. 
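Cell-versus-EV relationships of this kind are captured by ordinary least squares. The minimal sketch below computes the slope, intercept, and R² for paired fold changes; the numbers are hypothetical stand-ins, not the study's measurements.

```python
# Minimal ordinary-least-squares sketch for a cell-vs-EV comparison.
# The paired fold changes are hypothetical placeholders; the point is
# only how slope, intercept, and R^2 are obtained from paired data.

def linregress_r2(xs, ys):
    """Least-squares slope, intercept, and coefficient of determination."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical (cellular ORF1p FC, EV ORF1p FC) pairs per cell line
cell = [1.0, 2.1, 2.5, 3.0, 4.2, 6.5, 7.0]
ev   = [1.0, 2.6, 2.8, 3.2, 3.0, 6.7, 7.8]
slope, intercept, r2 = linregress_r2(cell, ev)
print(f"slope={slope:.2f} intercept={intercept:.2f} R^2={r2:.2f}")
```

A significance test (the reported p-values) would additionally require the t-distribution of the slope estimate, which SciPy's `linregress` provides out of the box.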
Parallel analyses of LINE-1 mRNA expression in cells and EVs ( C, bottom row) showed A549 cells to have the lowest expression levels, followed by H460 cells 1.30 FC, BEAS-2B (1.35), H441 (2.84), H827 (2.92), H1299 (3.97), and H520 (9.57). In EVs, H460 cells exhibited 5.29-fold enrichment relative to A549 followed by BEAS-2B (8.44), H441 (12.79), H520 (28.26), H827 (32.44), and H1299 (56.89). Log2 transformation followed by non-linear Spearman's regression ( D, bottom) showed a significant relationship ( r = 0.89, p = 0.01) between these two variables. Curve fitting yielded a sigmoidal relationship ( r = 0.90) which may be artifactually driven by H520 cells or, alternatively, implicate loading constraints of LINE-1 mRNA in cells with high LINE-1 content. Together, these results demonstrate that constitutive expression of LINE-1 in cells can be approximated by measurements of EV LINE-1 cargo.

2.2. Cellular and EV LINE-1 Levels under Inducible Conditions

To determine if the remarkable cell-specific EV profiles seen under constitutive conditions were preserved upon the induction of LINE-1, cells were challenged with BaP, a known lung carcinogen and LINE-1 inducer. H460 cells were exposed to different BaP concentrations and allowed to condition their medium for 48 h . We have previously shown that BaP treatment does not change the number of secreted EVs, though a minor shift in the size of exosomes was noted . Consistently, there was no statistically significant change in the number of EVs secreted after BaP exposure, although the size pattern of EVs appeared altered. Carcinogen treatment did not compromise cell viability ( A), but readily induced LINE-1 protein ( B). ORF1p was quantified in cells and EVs using the ELISA platform ( C, top row) and expressed as FC relative to DMSO. 
Cellular ORF1p exhibited concentration-dependent induction profiles ( p < 0.05), with a 5.04 ± 0.16 mean fold induction at 1 µM BaP and a 4.92 ± 1.62 mean fold induction in EVs at the same concentration. ORF1p increases in BaP-treated cells were proportional to their corresponding EVs, as established by linear regression ( D, top; p = 0.03, R² = 0.95). Cellular LINE-1 mRNA also increased as a function of increasing BaP concentrations ( C, bottom row), with 1.43, 1.80, and 1.95-fold increases seen in cells treated with 0.25, 0.5, and 1 µM BaP, respectively ( p < 0.05). Mean LINE-1 mRNA in EVs exhibited a concentration-dependent trend, but the response was variable and not significantly different from DMSO. There was, however, a significant relationship between mean cellular and mean EV LINE-1 mRNA and ORF1p levels ( D, bottom; p = 0.048, R² = 0.91). Together, these results demonstrate that fluctuations in cellular LINE-1 expression following carcinogen induction are mirrored by their corresponding EVs.

2.3. LINE-1 in EV Isolates from Human Plasma of Ostensibly Healthy Individuals

We first examined the abundance of LINE-1 ORF1p in ostensibly healthy subjects to evaluate the degree of inter-individual variability . Twelve subjects (six African Americans and six Hispanics) were matched by age within 2 years, sex, and race ( D). EVs were isolated by ultracentrifugation and normalized by total plasma volume (3 mL). For each subject, half of the preparation was used for measurements of LINE-1 mRNA and for quantification of ORF1p. EV proteins were visualized by Western blotting ( A). The EV markers Annexin 2, Flotillin-1, and ALIX were used as positive controls. ORF1p dimer was present in all donors and exhibited significant variation across the cohort. Protein levels and a Ponceau stain are depicted in . β-Actin, a positive control, was detected in all samples, while LINE-1 mRNA could only be detected in trace amounts relative to the RTC. 
Despite low detection in human EVs, a positive correlation between EV LINE-1 mRNA and EV ORF1p levels was found ( C; p = 0.004, R² = 0.58). This relationship may reflect proportional EV cargo loading of ORF1p and LINE-1 mRNA, arguably in the form of high affinity LINE-1 ribonucleoprotein complexes . In follow-up studies, we used densitometric measurements of ORF1p by Western blots normalized to total protein or mRNA levels to evaluate differences in LINE-1 EV cargo in our cohort by sex and race/ethnicity. ORF1p EV content was higher and exhibited a broader range in females than in males, with a mean ORF1p level of 94.01 arbitrary units versus 21.17 in males ( p = 0.04). Mean LINE-1 mRNA levels were generally higher in females, with EV LINE-1 mRNA content in females at 0.26 log2 FC relative to RTC, which was borderline significant ( p = 0.056) compared to males, with a mean of 0.053 log2 FC relative to RTC. While the profile of LINE-1 mRNA or ORF1p in African Americans compared to Hispanics was not significantly different, African Americans had slightly higher ORF1p content, and 2–3 times greater ranges of LINE-1 values compared to Hispanics. As we normalized EV inputs to total plasma volume, analyses were completed to rule out that the patterns observed were attributed to variations in EV abundance. Indeed, total protein levels exhibited no significant differences between groups . Together, these results reveal that EV LINE-1 content varies considerably between individuals. Importantly, these findings can inform the creation of larger range finding studies to evaluate EV LINE-1 patterns in cancer patients.

2.4. Relationship between Whole Plasma and EV ORF1p Levels

Our previous studies noted a relative absence of free LINE-1 products in plasma compared to the highly enriched quantities found in plasma EVs . 
Thus, we hypothesized that the majority of circulating LINE-1 may be present in EVs, and that these levels may be approximated using whole-plasma ELISA measurements. To test this hypothesis, we measured bulk plasma ORF1p in ostensibly healthy subjects using ELISA and compared these values to measurements of LINE-1 mRNA ( G) and EV ORF1p ( H). Remarkably, a significant relationship between bulk plasma ELISA and EV LINE-1 mRNA levels was found ( R² = 0.41, p = 0.024), along with a near-significant relationship with EV ORF1p levels ( R² = 0.30, p = 0.063). These results support the hypothesis that circulating LINE-1 is predominantly contained within EVs and that performing a whole-plasma ELISA may approximate these values with a reasonable degree of fidelity.

2.5. EV LINE-1 mRNA in EVs Isolated from Lung Cancer Patients

Using a single-blinded approach, LINE-1 mRNA was measured in plasma EVs from 28 patients who underwent lung resection surgery for a lung lesion. EVs were isolated from 250 μL of plasma and processed for measurements of LINE-1 mRNA. The presence of EVs in these preparations was verified by NTA ( A). As a quality control, samples from six donors with less than 2300 particles and a β-Actin Ct greater than 38 were excluded from further analysis ( B). Stratification of EV mRNA profiles based on cancer stage and histopathology showed that Stage 1 subjects had lower EV LINE-1 levels than those with Stages II and IV ( C). Notably, not all Stage II and IV tumors exhibited high LINE-1 levels, suggesting that heterogeneity of response may be present. While mean LINE-1 levels did not significantly differ between squamous cell carcinomas and adenocarcinomas, a trend for higher levels in squamous cell carcinomas and metastatic tumors emerged ( D). The profile of EV LINE-1 mRNA content in NSCLCs paralleled the patterns of ORF1p expression in NSCLC tissue sections compared to non-tumor tissue ( E). 
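The quality-control rule described here (exclude samples with fewer than 2300 NTA particles or a β-Actin Ct above 38) can be expressed as a simple filter. The sketch below assumes failing either criterion triggers exclusion (the text's "and" could also be read as both criteria jointly); the sample records are hypothetical.

```python
# Sketch of the sample quality-control filter described in the text.
# Assumption: a sample is excluded if EITHER criterion fails; the
# record values below are hypothetical, not the study's data.

MIN_PARTICLES = 2300   # minimum NTA particle count
MAX_ACTIN_CT = 38.0    # maximum acceptable beta-Actin Ct

samples = [
    {"id": "P01", "particles": 5400, "actin_ct": 31.2},
    {"id": "P02", "particles": 1800, "actin_ct": 30.5},  # too few particles
    {"id": "P03", "particles": 4100, "actin_ct": 38.9},  # actin Ct too high
]

def passes_qc(s):
    """True when the sample meets both the particle-count and Ct criteria."""
    return s["particles"] >= MIN_PARTICLES and s["actin_ct"] <= MAX_ACTIN_CT

kept = [s["id"] for s in samples if passes_qc(s)]
print(kept)  # -> ['P01']
```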
We next determined whether EV LINE-1 content serves as a proxy of cellular LINE-1 levels . A panel of cells that included the non-transformed bronchial epithelial line, BEAS-2B, along with several NSCLC epithelial cell lines was examined ( A). Cells were allowed to condition EV-depleted media for 48 h before the collection of cell and media fractions. Western blotting showed constitutive ORF1p expression across all lines ( B). A549 cells exhibited the lowest levels of ORF1p, followed in ascending order by BEAS-2B, H441, H1299, H460, H827, and H520 cell lines. As A549 cells consistently exhibited the lowest LINE-1 levels, all subsequent LINE-1 measurements were expressed as a fold change relative to A549 cells or A549 EVs. The next series of studies relied on PEG precipitation for EV isolation in order to facilitate comparisons between cells and plasma. PEG precipitation is incompatible with SDS-PAGE , and therefore ELISA was used to measure ORF1p levels ( C, top row). ORF1p measures in cell lysates using the ELISA platform displayed rank order profiles comparable to those seen by Western blotting ( B). The BEAS-2B, H441, H1299, and H827 cell lines showed moderate ORF1p levels in EVs ranging from 2.6 to 3.2 FC relative to A549 EVs. The H460 and H520 cell lines had the highest ORF1p content with 6.7 and 7.8 FC, respectively. The linear regression showed a significant relationship between cellular and EV ORF1p content ( D, top, p = 0.03, R 2 = 0.64), indicating that ORF1p content between cells and their corresponding EVs is proportional. Parallel analyses of LINE-1 mRNA expression in cells and Evs ( C, bottom row) showed A549 cells to have the lowest expression levels, followed by H460 cells 1.30 FC, BEAS-2B (1.35), H441 (2.84), H827 (2.92), H1299 (3.97), and H520 (9.57). In Evs, H460 cells exhibited 5.29-fold enrichment relative to A549 followed by BEAS-2B (8.44), H441 (12.79), H520 (28.26), H827 (32.44), and H1299 (56.89). 
Log2 transformation followed by non-linear Spearman’s regression ( D, bottom) showed a significant relationship ( r = 0.89, p = 0.01) between these two variables. Curve fitting yielded a sigmoidal relationship ( r = 0.90), which may be artifactually driven by H520 cells or, alternatively, implicate loading constraints of LINE-1 mRNA in cells with high LINE-1 content. Together, these results demonstrate that constitutive expression of LINE-1 in cells can be approximated by measurements of EV LINE-1 cargo. To determine if the remarkable cell-specific EV profiles seen under constitutive conditions were preserved upon the induction of LINE-1, cells were challenged with BaP, a known lung carcinogen and LINE-1 inducer. H460 cells were exposed to different BaP concentrations and allowed to condition their medium for 48 h . We have previously shown that BaP treatment does not significantly change the number of secreted EVs, though a minor shift in the size of exosomes was noted . Carcinogen treatment did not compromise cell viability ( A), but readily induced LINE-1 protein ( B). ORF1p was quantified in cells and EVs using the ELISA platform ( C, top row) and expressed as FC relative to DMSO. Cellular ORF1p exhibited concentration-dependent induction profiles ( p < 0.05), with a 5.04 ± 0.16 mean fold induction at 1 µM BaP and a 4.92 ± 1.62 mean fold induction in EVs at the same concentration. ORF1p increases in BaP-treated cells were proportional to their corresponding EVs, as established by linear regression ( D, top; p = 0.03, R 2 = 0.95). Cellular LINE-1 mRNA also increased as a function of increasing BaP concentrations ( C, bottom row), with 1.43, 1.80, and 1.95-fold increases seen in cells treated with 0.25, 0.5, and 1 µM BaP, respectively ( p < 0.05).
Mean LINE-1 mRNA in EVs exhibited a concentration-dependent trend, but the response was variable and not significantly different from DMSO. There was, however, a significant relationship between mean cellular and mean EV LINE-1 mRNA and ORF1p levels ( D, bottom; p = 0.048, R 2 = 0.91). Together, these results demonstrate that fluctuations in cellular LINE-1 expression following carcinogen induction are mirrored by their corresponding EVs. We first examined the abundance of LINE-1 ORF1p in ostensibly healthy subjects to evaluate the degree of inter-individual variability . Twelve subjects (six African Americans and six Hispanics) were matched by age within 2 years, sex, and race ( D). EVs were isolated by ultracentrifugation and normalized by total plasma volume (3 mL). For each subject, half of the preparation was used for measurements of LINE-1 mRNA and the other half for quantification of ORF1p. EV proteins were visualized by Western blotting ( A). The EV markers Annexin 2, Flotillin-1, and ALIX were used as positive controls. ORF1p dimer was present in all donors and exhibited significant variation across the cohort. Protein levels and a Ponceau stain are depicted in . β-Actin, a positive control, was detected in all samples, while LINE-1 mRNA could only be detected in trace amounts relative to the RTC. Despite low detection in human EVs, a positive correlation between EV LINE-1 mRNA and EV ORF1p levels was found ( C; p = 0.004, R 2 = 0.58). This relationship may reflect proportional EV cargo loading of ORF1p and LINE-1 mRNA, arguably in the form of high-affinity LINE-1 ribonucleoprotein complexes . In follow-up studies, we used densitometric measurements of ORF1p by Western blots normalized to total protein or mRNA levels to evaluate differences in LINE-1 EV cargo in our cohort by sex and race/ethnicity. ORF1p EV content was higher and exhibited a broader range in females than in males, with a mean ORF1p level of 94.01 arbitrary units versus 21.17 in males ( p = 0.04).
Mean LINE-1 mRNA levels were generally higher in females, with EV LINE-1 mRNA content in females at 0.26 log2 FC relative to RTC, which was borderline significant ( p = 0.056) compared to males, with a mean of 0.053 log2 FC relative to RTC. While the profile of LINE-1 mRNA or ORF1p in African Americans compared to Hispanics was not significantly different, African Americans had slightly higher ORF1p content, and 2–3 times greater ranges of LINE-1 values compared to Hispanics. As we normalized EV inputs to total plasma volume, analyses were completed to rule out that the patterns observed were attributed to variations in EV abundance. Indeed, total protein levels exhibited no significant differences between groups . Together, these results reveal that EV LINE-1 content varies considerably between individuals. Importantly, these findings can inform the creation of larger range finding studies to evaluate EV LINE-1 patterns in cancer patients.
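The simple linear regressions used for these comparisons (e.g., R² = 0.58 between EV LINE-1 mRNA and EV ORF1p in healthy donors) reduce to an ordinary least-squares fit that can be computed without any statistics package. A minimal sketch; the paired values below are hypothetical, not the study data:

```python
# Ordinary least-squares fit with R^2, the quantity reported for the
# EV LINE-1 mRNA vs. ORF1p relationships above. Paired values are
# hypothetical placeholders, not the study data.

def linear_fit(x, y):
    """Return (slope, intercept, r_squared) for y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - slope * xi - intercept) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

ev_mrna  = [0.8, 1.1, 1.9, 2.4, 3.0, 3.6]  # hypothetical EV LINE-1 mRNA (a.u.)
ev_orf1p = [0.5, 1.0, 1.6, 2.1, 3.2, 3.3]  # hypothetical EV ORF1p (a.u.)

slope, intercept, r2 = linear_fit(ev_mrna, ev_orf1p)
print(round(r2, 2))  # 0.97
```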
Aberrant expression of the oncogenic retrotransposon LINE-1 has emerged as a hallmark of NSCLC. Most of the evidence to date has arisen from measurements of LINE-1 encoded ORF1p in cancer tissues at the time of resection, thus limiting the strength of inferences that can be made about the trajectory of illness, treatment response, or clinical outcome. To overcome these limitations, we have proposed measurements of LINE-1 cargo in human plasma-derived EVs as proxies of tissue LINE-1 expression. Here, we present proof of concept evidence that the LINE-1 cargo in EVs isolated from NSCLC cell lines mirrors their cellular content, and that measurements of LINE-1 analytes in plasma EVs can be used to stratify healthy subjects by gender and possibly race/ethnicity, and lung cancer patients by stage and histological type.
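The claim that EV LINE-1 cargo mirrors cellular content rests on a Spearman rank correlation between the two (r = 0.89 in the Results). That computation can be reproduced in plain Python from the fold-change values reported in the Results, with A549 set to 1.0:

```python
# Spearman rank correlation in plain Python, reproducing the reported
# r = 0.89 between cellular and EV LINE-1 mRNA fold changes (values
# taken from the Results, with A549 set to 1.0).

def ranks(values):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Fold changes vs. A549, ordered [A549, H460, BEAS-2B, H441, H827, H1299, H520]
cell_mrna = [1.0, 1.30, 1.35, 2.84, 2.92, 3.97, 9.57]
ev_mrna   = [1.0, 5.29, 8.44, 12.79, 32.44, 56.89, 28.26]

print(round(spearman(cell_mrna, ev_mrna), 2))  # 0.89
```

Note that the rank correlation recovers the reported coefficient exactly because H520 breaks the otherwise monotone ordering (rank 7 in cells but rank 5 in EVs), which is also what the sigmoidal curve fit in the Results hints at.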
EV cargo need not be consistently proportional between cells and EVs as the loading of DNA, RNA, and protein may be subject to regulation via multiple biochemical pathways, and influenced by active and passive export mechanisms . Thus, we initiated systematic studies to evaluate the levels of LINE-1 analytes in EVs and their cells of origin. Our findings under both constitutive and inducible conditions revealed that the proportionality between cellular and EV cargo is maintained for both LINE-1 mRNA and ORF1p, providing proof-of-principle that EV LINE-1 may serve as a “liquid biopsy” of LINE-1 tissue levels. While EV ORF1p can be readily detected in vitro and in vivo, EV LINE-1 mRNA was more difficult to detect, a limitation likely caused by the stoichiometry of LINE-1 ribonucleoprotein complex formation, in which one 6 kb LINE-1 mRNA molecule binds up to 240 ORF1p molecules and one molecule of ORF2p . In our studies, ORF1p was the target protein of interest given its high abundance compared to ORF2p, which exhibits low cellular abundance due to deficits during cellular processing . The finding that ORF1p is present in EVs isolated from cells and plasma, and detected as a dimer, was intriguing. We and others have previously reported the presence of an ORF1p dimer in cells, but the factors that dictate the appearance of the dimer relative to the monomer or the trimer have remained elusive . While ORF1p dimers may represent partially denatured trimers , our data argue against this interpretation as dimers were selectively enriched in EVs and segregated from other LINE-1 products. Given our interest in using LINE-1 readouts as biomarkers, we first explored EV LINE-1 cargo in the plasma of ostensibly healthy subjects to evaluate inter-individual variability. 
Our investigation revealed considerable heterogeneity in LINE-1 expression between individuals, with the source of variability likely reflecting genetic, environmental, and lifestyle interactions that influence circulating and tissue LINE-1 levels. For ostensibly healthy subjects, we identified sex-specific and possibly racial/ethnic differences. Females had higher EV LINE-1 levels than males and African Americans displayed wider ranges of EV LINE-1 values compared to Hispanics. These results are consistent with previous studies showing that males exhibit higher levels of LINE-1 methylation than females and that LINE-1 methylation can vary as a function of race and ethnicity . Moreover, we recently found greater numbers of ORF1p+ cells in African American lung cancer patients than Caucasians, with suggestive evidence that this difference may be related to lower survival rates in African Americans . Together, our findings confirm previous observations and lend support to the conclusion that LINE-1 expression in plasma, and by extension in EVs, can be influenced by phenotypic traits . As expected, levels of LINE-1 analytes in subjects with lung cancers were broadly distributed as a function of cancer stage and histological type. We found that mean EV LINE-1 mRNA increased with cancer stage and that higher LINE-1 levels were seen in squamous cell carcinomas. These results are consistent with other studies showing altered LINE-1 DNA methylation in NSCLC leading to variable retroelement expression . Saito et al. (2010) examined tumor LINE-1 methylation status in 379 cases of NSCLC and found that LINE-1 hypomethylation increases as a function of cancer stage and that squamous cell carcinoma, an NSCLC subtype linked to smoking , exhibited lower median levels of LINE-1 methylation compared to adenocarcinomas . Consistent with these findings, the H520 squamous cell carcinoma line exhibited the highest levels of LINE-1 expression.
In contrast, the lowest LINE-1 levels were seen in adenocarcinoma patients and the adenocarcinoma A549 cell line. Previous studies have shown that LINE-1 may be particularly useful in diagnostics for early-stage cancers, as Stages IA and IB exhibit disparate levels of DNA methylation and LINE-1 hypomethylation can be used to identify early NSCLC among current smokers . A recent study by Taylor et al. (2023) examined circulating ORF1p levels in plasma using a novel single-molecule array (Simoa) assay and found that plasma ORF1p was elevated in individuals with various cancers, including NSCLC . Among these individuals, a greater proportion of squamous cell carcinomas were LINE-1 positive compared to other histological types. This study also confirmed the presence of ORF1p within EVs, although it also reported an abundance of free-floating ORF1p. While our methodology detected EV ORF1p in healthy subjects, detection with the Simoa assay depended greatly on the protein sequence of the capture/detection nanobody. As such, differences may be explained by methodology-related differences in the detection of multimeric forms of ORF1p. An additional consideration regarding the diagnostic use of LINE-1 EVs in complex matrices such as plasma is the relative enrichment of EVs from target tissues compared to other sources. We hypothesize that the bulk of LINE-1 readily detected in plasma originates from tissues that exhibit constitutive LINE-1 expression such as the brain , esophagus, prostate, stomach, or heart muscle . Our findings raise significant questions regarding the functional consequences of circulating EV LINE-1. We and others have noted that EVs containing LINE-1 products possess reverse transcriptase activity and are capable of modifying the DNA of recipient cells . Enhanced EV LINE-1 export from tumors may therefore increase the potential for genomic aberrations and transformation of near or distant tissues that take up these EVs.
Further mechanistic studies will be required to explore these dynamics. Future studies can examine whether the depletion of circulating NSCLC EVs reduces cancer progression. In conclusion, our findings suggest that measurements of LINE-1 analytes in EVs may serve as a proxy for tracking changes in NSCLC. Efforts to understand the diagnostic potential of LINE-1 as a cancer biomarker will require the study of large cohorts along with the monitoring of environmental and lifestyle factors that may influence circulating LINE-1 levels. 4.1. Tissue Culture Cell lines were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA) and confirmed to be free of mycoplasma contamination . All cell lines were cultured at 37 °C, 5% CO 2 in a humidified environment, and seeded to 70% confluence for 24 h. Cell lines were cultured as follows: BEAS-2B cells were cultured in LHC-9 medium, A549 cells in DMEM plus 10% fetal bovine serum (FBS), and the remaining lines in RPMI plus 10% FBS (Gibco/ThermoFisher Scientific, Waltham, MA, USA). To prepare for EV collection, cells were washed with DPBS and incubated in media containing EV-depleted FBS (Gibco/ThermoFisher Scientific) for 48 h. Conditioned media were collected, frozen, and processed for EV isolation upon thawing. To study inducible LINE-1 responses, 2 × 10 6 H460 cells were seeded in 150 mm dishes and allowed to attach overnight. The next day, cells were washed with DPBS and incubated with RPMI medium containing 10% EV-depleted FBS plus various concentrations of benzo(a)pyrene (BaP), a lung carcinogen and known inducer of LINE-1 in normal and transformed lung epithelial cell lines , dissolved in 0.2% DMSO. Conditioned media were collected after 48 h and processed for EV isolation and measurement of cellular viability by trypan blue dye exclusion. Unconditioned media (UCM) was used as media control. 
Conditioned EV-free media (EFM) were used to measure the relative abundance of free versus EV-associated LINE-1 cargo. 4.2. EV Isolation and Characterization For in vitro experiments, approximately 120 mL of conditioned media was centrifuged for 5 min at 500× g and concentrated to <12 mL using Centricon Plus-70 100 kDa MWCO centrifugal filter units (Millipore-Sigma, Darmstadt, Germany). The concentrate was centrifuged at 21,800× g for 30 min to pellet cellular debris, apoptotic bodies, and large vesicles. EVs were isolated using Polyethylene Glycol (PEG) or ultracentrifugation, as indicated. For PEG isolation, PEG-MW 6000 (Millipore-Sigma) was added to the supernatant to 10% w / v , incubated on ice for 15 min, and centrifuged at 21,800× g for 15 min. To remove protein contaminants, the EV pellet was resuspended in PBS, transferred to a clean tube, and reprecipitated in 10% PEG. PEG EV pellets were resuspended in PBS and confirmed to be free of debris using Nanosight Nanoparticle Tracking Analysis (NTA) (Malvern Panalytical, Malvern, UK). For ultracentrifugation, concentrated media were centrifuged at 100,000× g for 2 h at 4 °C. Pellets were washed in cold PBS and the centrifugation was repeated before resuspension of EV pellets in PBS. Protein was used to normalize EV input in in vitro experiments. Nanosight NTA of cell media preparations was performed by the Center for Nanotechnology in Drug Delivery at the University of North Carolina Chapel Hill. EVs in PBS were diluted and quantified with a NanoSight NS500 (Malvern) equipped with green laser illumination (532 nm). Each sample was read five times, using four samples per treatment. 4.3. Collection of Plasma EVs Plasma was purchased from BioIVT (Westbury, NY, USA) from ostensibly healthy donors who were screened verbally to rule out serious health conditions such as prior metastatic disease, diabetes, or other inflammatory diseases. Plasma was collected using sodium citrate.
Plasma was also obtained from patients undergoing surgical resection for a lung lesion at Houston Methodist Hospital. The collection of plasma and nodular tissue was approved by the Institutional Review Board at Houston Methodist Research Institute (Protocol # 00004763). Heparinized venous blood was collected prior to nodule resection and the final pathology report was obtained to identify tumor characteristics and staging for each patient. For plasma EV isolation, samples were thawed, and platelets were removed after two centrifugation cycles at 3600× g . Cleared plasma was diluted 4× with DPBS and centrifuged at 21,800× g for 30 min to pellet debris. The supernatant was ultracentrifuged as described above. EV inputs were normalized by input plasma volume. 4.4. Western Blotting Cells and EVs were lysed in RIPA buffer and subjected to Western blotting as described . Immunoblots were imaged using a Konica Minolta SRX101A film developer (Konica Minolta Medical & Graphic, Inc., Tokyo, Japan). The ORF1p polyclonal antibody used was custom-made using the 14-amino acid N-terminus of the ORF1p protein (ORF1p1-14). The specificity of this antibody has been confirmed in previous studies . The monoclonal ORF1p antibody used was purchased from Millipore-Sigma (MABC1152). Antibodies to ALIX (CST, 92880), CD9 (CST, 13403), Flotillin-1 (CST, 18634), GAPDH (CST, 2118), and Calnexin (MA5-31501) were obtained commercially via Cell Signaling Technology (Danvers, MA, USA) and ThermoFisher. 4.5. ELISA ORF1p was measured using a competitive indirect ELISA, where indicated, as this platform is not subject to PEG interference . Briefly, pre-blocked streptavidin-coated plates (Pierce/ThermoFisher) were incubated with biotinylated ORF1p1-14. After washing with PBS/0.01% Tween, diluted plasma, standards, and primary antibody were mixed, added to the plates, and incubated for 1 h. 
Standards were generated using known quantities of ORF1p1-14 and a matrix control of EV pellets derived from murine cells. After washing and incubation with HRP-linked secondary antibody (ThermoFisher), the ELISA was developed using TMB (Pierce/ThermoFisher) substrate and subsequently quenched with sulfuric acid. Absorbance was read at 450 nm. 4.6. LINE-1 mRNA Cargo and Quantification EVs from H520 cells were used to evaluate LINE-1 mRNA content. Briefly, RNAseA was added to EVs in DPBS and incubated for 10 min at room temperature. RNA Secure (Invitrogen/ThermoFisher) was then added to inactivate residual RNase activity prior to RNA extraction. EV protein was used to normalize EV input across cell lines and treatments. RNA was extracted from EV pellets using a Quick RNA kit (Zymo, Irvine, CA, USA) with DNAseI digestion (Turbo DNA Free, Invitrogen/ThermoFisher). RNA was eluted in equal volumes of nuclease-free water, and 3 μL was used for each RT-qPCR reaction. LINE-1 and β-Actin were quantified using Luna Universal One-Step RT-qPCR (NEB, Ipswich, MA, USA) according to the manufacturer’s protocol, with duplicate reactions for each sample. LINE-1 contains no introns; thus, LINE-1 primers cannot distinguish between residual genomic (g) DNA and cDNA. Thus, each LINE-1 sample was normalized to a matched control lacking reverse transcriptase (RTC) to control for residual gDNA. As exon-junction spanning β-Actin primers do not amplify gDNA, the β-Actin signal was normalized to a non-template control (NTC). All primers had amplification efficiencies of >90%. LINE-1 primer: Forward 5′ ACACCTATTCCAAAATTGACCAC 3′, Reverse 5′ TTCCCTCTACACACTGCTTTGA 3′, and probe 5′ TGGAAACTGAACAACCTGCTCCTGA 3′. β-Actin primers: Forward 5′ CTGGCACCCAGCACAATG 3′, Reverse 5′ GCCGATCCACACGGAGTACT 3′, Probe: 5′ ATCAAGATCATTGCTCCTCCTGAGCGC 3′. 4.7. Immunohistochemistry Sections of human lung tumors and patient-matched non-tumor adjacent tissues were deparaffinized in xylene and rehydrated in a graded ethanol series.
HistoZyme was performed for antigen retrieval. Endogenous peroxidase was inactivated by treating tissue sections with 3% H 2 O 2 /methanol for 15 min. The slides were washed five times for five min each and the sections blocked with 5% goat serum containing 0.1% Triton X-100 at room temperature for 1 h, followed by labeling with primary ORF1 antibody (Millipore) at 1:200 at 4 °C overnight. The primary antibody was labeled with HRP-conjugated secondary antibodies at 1:200 for 1 h at room temperature. The signal was visualized using a DAB substrate kit. Stained sections were dehydrated and mounted. Images were taken with a Nikon Eclipse Zyla sCMOS microscope at 40× magnification. 4.8. Statistical Analysis Statistical analyses were performed using GraphPad Prism 8.1.2. A p -value of less than 0.05 was considered significant. EV LINE-1 mRNA enrichment was compared to background levels using a one-tail t -test. The association between cellular and EV ORF1p with or without BaP treatment was established using simple linear regression. The relationship between cellular and EV mRNA was tested using Spearman’s non-linear regression. BaP treatments were compared to the DMSO control using a one-way ANOVA with Dunnett’s multiple comparisons or Kruskal–Wallis test with Dunn’s multiple comparisons, as indicated. The relationship between EV LINE-1 mRNA and ORF1p content in healthy plasma EVs was assessed by simple linear regression. The mean EV LINE-1 mRNA and ORF1p content were compared between males and females and African American and Hispanic American subjects using unpaired t -tests. Whole plasma ORF1p was compared to EV ORF1p levels using a simple linear regression.
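The LINE-1 RT-qPCR normalization described in the Methods (correcting the RT+ signal against the matched no-reverse-transcriptase control, RTC, to remove residual gDNA amplification) can be sketched as follows. Subtracting the two signals on the linear 2^−Ct scale is one way to implement the correction, and the Ct values are hypothetical, not the study data:

```python
# Sketch of the LINE-1 qPCR correction against the no-RT control (RTC).
# 2^-Ct(RT+) reflects cDNA plus residual gDNA, while 2^-Ct(RTC) reflects
# gDNA only; subtracting on the linear scale is one way to implement the
# normalization described above. Ct values are hypothetical.

def rtc_corrected(ct_rt_plus, ct_rtc):
    """LINE-1 cDNA signal above the residual-gDNA background."""
    return 2.0 ** -ct_rt_plus - 2.0 ** -ct_rtc

a549 = rtc_corrected(ct_rt_plus=28.0, ct_rtc=33.0)   # reference line
h520 = rtc_corrected(ct_rt_plus=24.5, ct_rtc=33.5)   # high-expressing line

fold_change = h520 / a549   # fold enrichment relative to A549
print(round(fold_change, 2))
```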
Radioligand therapy in the therapeutic strategy for patients with gastro-entero-pancreatic neuroendocrine tumors: a consensus statement from the Italian Association for Neuroendocrine Tumors (Itanet), Italian Association of Nuclear Medicine (AIMN), Italian Society of Endocrinology (SIE), Italian Association of Medical Oncology (AIOM). | ca06459d-4f87-4261-951e-f9d2a570a5b9 | 11729074 | Internal Medicine[mh] | Neuroendocrine neoplasms (NENs) comprise a heterogeneous group of malignancies arising from the diffuse neuroendocrine cell system. Gastroenteropancreatic (GEP) NENs represent the most common subtype, with an increasing worldwide incidence over the past decades . According to their histopathological features, mitotic count, and Ki-67 index, GEP-NENs are classified as neuroendocrine tumors (NETs) or neuroendocrine carcinomas (NECs). GEP-NETs are well-differentiated neoplasms, defined as grade G1 (Ki-67 < 3%, mitotic count < 2/2 mm²), G2 (Ki-67 3–20%, mitotic count 2–20/2 mm²), or G3 (Ki-67 > 20%, mitotic count > 20/2 mm²). In contrast, GEP-NECs are aggressive and poorly differentiated neoplasms G3 (Ki-67 > 20%, mitotic count > 20/2 mm²) . The majority of GEP-NETs are sporadic and non-functional . Therapy goals encompass tumor excision with curative intent and/or the halting of disease progression, and the control of clinical symptoms in functional NETs. Surgery, if feasible, represents the primary and only curative approach for localized GEP-NET G1 or G2 but may also be considered in the context of advanced NETs for palliative resection, debulking surgery, or hepatic metastasectomy. At diagnosis, up to 80% of GEP-NETs are locally advanced or metastatic; therefore, non-surgical strategies such as somatostatin analogs (SSA), radioligand therapy (RLT), targeted therapies with the mTOR inhibitor everolimus or the multiple tyrosine kinase inhibitor sunitinib, and systemic chemotherapy, should be evaluated. 
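The grading thresholds summarized above can be written down as a small rule. This is an illustrative simplification of the scheme described in the text: the higher of the Ki-67 index and the mitotic count drives the grade, and differentiation status separates well-differentiated NET G3 from poorly differentiated NEC:

```python
# Illustrative encoding of the GEP-NEN grading thresholds given above
# (Ki-67 index in %, mitotic count per 2 mm^2). Well-differentiated
# neoplasms are NET G1-G3; poorly differentiated G3 neoplasms are NECs.
# A simplified sketch, not a clinical tool.

def gep_nen_grade(ki67_percent, mitotic_count, well_differentiated):
    """Return the grade/category implied by the thresholds in the text."""
    if ki67_percent < 3 and mitotic_count < 2:
        grade = "G1"
    elif ki67_percent <= 20 and mitotic_count <= 20:
        grade = "G2"
    else:
        grade = "G3"
    if grade == "G3" and not well_differentiated:
        return "NEC (G3)"
    return f"NET {grade}"

print(gep_nen_grade(1.5, 1, well_differentiated=True))    # NET G1
print(gep_nen_grade(12, 8, well_differentiated=True))     # NET G2
print(gep_nen_grade(35, 25, well_differentiated=False))   # NEC (G3)
```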
Specifically, RLT is an effective and relatively safe option that has been investigated for over 20 years in well-differentiated NETs expressing somatostatin receptors (SSTR). RLT involves administering radionuclide-labeled SSA, which selectively targets NET cells. The role of RLT in NENs is evolving, and novel strategies are under evaluation, including the implementation of new radiopharmaceuticals, combination with other therapies, or intra-arterial administration. Currently, [177Lu]Lu-[DOTA0,Tyr3]-octreotate (177Lu-DOTATATE) is indicated for unresectable, metastatic or locally advanced, G1 or G2, SSTR-positive GEP-NETs as a second-line option after SSA. The approval by the European Medicines Agency (EMA) in 2017 and the US Food and Drug Administration (FDA) in 2018 was strongly supported by the hallmark phase III NETTER-1 trial, which demonstrated a significant improvement in progression-free survival (PFS), response rate, and quality of life (QoL) in the 177Lu-DOTATATE arm compared to high-dose octreotide (60 mg/month) in patients with advanced midgut NETs progressive on SSA. To date, the optimal therapeutic algorithm for GEP-NETs, including the role of RLT, has not been standardized. Current clinical practice considers RLT when progression occurs on previous pharmacological treatment. The European Neuroendocrine Tumor Society (ENETS) supports the role of RLT in intestinal NETs as second-line therapy after the failure of SSA or as third-line therapy after the failure of everolimus. Regarding pancreatic NETs (panNETs), RLT is recommended in lower-grade NETs in case of progression after SSA, chemotherapy, or targeted drugs (everolimus/sunitinib). The European Society of Medical Oncology (ESMO) guidelines encourage considering RLT earlier in the treatment sequence, especially in panNETs. According to ESMO guidelines, RLT is recommended as second-line therapy in progressive midgut NETs after SSA but may also be considered in carefully selected NET G3 cases.
Both ENETS and ESMO guidelines recognize the role of RLT in managing carcinoid syndrome or functional NETs refractory to SSA. Given that current recommendations are not fully uniform, it is crucial to provide clinicians with clear and well-structured guidance for personalized therapeutic decisions in real-world clinical practice. Therapy should be tailored to each patient according to tumor pathological and functional status, SSTR imaging, patient choice, and comorbidities. Therefore, multidisciplinary care of patients affected by GEP-NETs at referral centers is pivotal in integrating and optimizing diagnostic and therapeutic strategies.

This work was developed by representatives from each of the participating scientific societies. After an initial web meeting, 10 questions were identified, focusing on the role of RLT in GEP-NETs, as detailed in Table . The questions were limited to sporadic, well-differentiated tumors, excluding high-grade NEC and non-sporadic tumors related to hereditary syndromes. Hence, the manuscript consistently uses the term “NET” in this context. Each question was addressed by a specialized team from the societies, leveraging their expertise. They conducted a PubMed literature search using the following keywords: (“radioligand therapy” OR “peptide receptor radionuclide therapy” OR “PRRT”) AND (“gastroenteropancreatic neuroendocrine tumors” OR “GEP-NETs” OR “gastroenteropancreatic NETs” OR “gastrointestinal neuroendocrine tumors” OR “pancreatic neuroendocrine tumors”). Since 177Lu-DOTATATE is the only therapy approved by authorities for treating patients with GEP-NET, the literature search was limited to articles covering 177Lu-DOTATATE exclusively. Studies focusing on treatments with other radioligands were considered outside the scope of this work. Recommendations are provided based on the highest quality evidence available and the collective expertise of the authors.
These are categorized by both the level of evidence (ranging from 1 to 5) and the strength of the recommendation (graded A to D), as outlined in suppl. Table according to the GRADE system. The manuscript was refined through textual email discussions and virtual meetings in October 2023, January 2024, and April 2024, leading to a consensus draft. After external review and approval from the executive boards of all societies, the final draft was endorsed.

Statements

Q1. Who is the potential candidate for treatment with RLT?

RLT with 177Lu-DOTATATE is currently approved by both the FDA and the EMA for the treatment of unresectable or metastatic, progressive, well-differentiated, G1/G2, SSTR-positive GEP-NETs. This indication is based on the multicenter, phase III, randomized, open-label NETTER-1 trial and large retrospective cohort studies. The NETTER-1 trial randomized 229 patients with well-differentiated, metastatic midgut NETs who progressed on standard-dose octreotide LAR to receive either 177Lu-DOTATATE at 7.4 GBq every 8 weeks or octreotide i.m. at 60 mg every 4 weeks. The estimated rate of PFS at month 20 was 65% in the 177Lu-DOTATATE arm and 11% in the control arm (HR: 0.21, P < 0.0001), with consistent benefits across major prespecified subgroups. Moreover, RLT with 177Lu-DOTATATE significantly improved many QoL domains compared with high-dose octreotide. While the NETTER-1 trial enrolled only patients with midgut NETs, a large body of evidence suggests that RLT with 177Lu-DOTATATE is also safe and effective in SSTR-positive pancreatic and hindgut primaries. More recently, the multicenter, phase III, randomized, open-label NETTER-2 trial has investigated 177Lu-DOTATATE plus octreotide versus high-dose octreotide in patients with newly diagnosed, advanced, SSTR-positive G2/G3 GEP-NETs with Ki-67 ranging between 10% and 55%.
The median PFS was significantly prolonged in the investigational arm (22.8 months) compared to the control arm (8.5 months; stratified HR: 0.28, p < 0.0001), with a significantly higher overall response rate (ORR) in the 177Lu-DOTATATE arm (43%) versus the high-dose octreotide arm (9.3%; OR: 7.81, p < 0.0001). On this basis, regulatory authorities are likely to formally expand the indications for RLT to include frontline treatment of patients with GEP-NETs harboring a Ki-67 between 10% and 55%. At present, potential candidates for RLT with 177Lu-DOTATATE include patients with advanced SSTR-positive GEP-NETs who have progressed on prior SSA therapy. Since high tumor burden negatively impacts the efficacy of RLT, early placement of RLT in the therapeutic algorithm is advocated. Therefore, all patients with SSTR-positive advanced GEP-NETs progressive on first-line treatment should be considered for RLT. In patients with bulky, symptomatic disease (particularly in the case of pancreatic primaries) who need rapid tumor shrinkage, chemotherapy might be preferred over RLT. In the future, potential candidates for RLT will also include patients with newly diagnosed G2/G3 GEP-NETs and Ki-67 ranging between 10% and 55%. The progressive expansion of the patient population potentially amenable to treatment with 177Lu-DOTATATE, in line with the advent of 177Lu-PSMA-617 for the treatment of prostate cancer, might pose several challenges from a production and drug administration standpoint. Timely preparation is needed to avoid bottlenecks and allow the administration of RLT to all potential candidates without delays.

Recommendation

The candidate for RLT is a patient with advanced (unresectable or metastatic) SSTR-positive GEP-NET who has progressed on prior therapy with SSA. For these patients, early incorporation of 177Lu-DOTATATE RLT into the treatment algorithm is recommended (1b - A).

Q2. How should progressive disease be defined before planning RLT?
Assessing disease progression in GEP-NETs before planning RLT involves a thorough evaluation using various clinical, imaging, and laboratory methods. Here are the key steps and considerations in assessing disease progression.

Imaging Studies: Utilize radiological imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) scans to assess evidence of primary tumors and metastasis and estimate tumor burden. These investigations help quantify neoplastic infiltration, pleural or ascitic fluid volume, and the presence of carcinoid heart disease (evaluated by echocardiography). CT and MRI also identify previously unrecognized lesions or conditions needing urgent treatment, such as pathological spinal fractures, and are essential for ruling out indications for locoregional therapies like embolization or chemoembolization in patients with liver-only disease.

Functional Imaging: Functional imaging, particularly 68Ga-SSTR PET scans (SSTR-PET), is specific for NETs. This imaging modality helps identify the presence of SSTRs on tumor cells, guiding the selection of patients suitable for RLT. For lesions with high proliferative indexes, [18F]FDG PET/CT may complement the assessment by visualizing heightened metabolic activity, thus refining the evaluation of lesions targeted with alternative therapies. Recent advancements include the introduction of volumetric parameters like SSR-derived tumor volume and total lesion SSR as tools to aid in predicting PFS before RLT.

Biomarkers: While specific tumor markers are assessed in functioning tumors associated with clinical syndromes, the use of biochemical markers like chromogranin A, alkaline phosphatase, or alterations in transaminase ratios has been proposed to predict therapy effectiveness, although without definitive evidence of their predictive significance. Elevated chromogranin A levels alone should not be considered definitive evidence of disease progression due to the marker’s low specificity.
Histological Evaluation: For long-term survivors with multiple secondary disease localizations and historical biopsies, it is crucial to consider a further histological evaluation before planning RLT due to the potential change in tumor grade over time. This is especially pertinent if the historical biopsy was from the primary tumor and there has been a significant increase in metastatic lesion number and sites. Performing an [18F]FDG PET/CT scan may help guide the selection of the most aggressive metastasis for biopsy.

Clinical Symptoms: Assess the patient’s symptoms, including changes in flushing, diarrhea, abdominal pain, or other related symptoms. Worsening or new symptoms may indicate disease progression, necessitating a CT, MRI, or PET scan to provide a comprehensive overview of the patient’s clinical condition.

Multidisciplinary Team Consultation: Engage a multidisciplinary team experienced in managing GEP-NETs, including oncologists, endocrinologists, gastroenterologists, radiologists, nuclear medicine specialists, pathologists, and surgeons, in the assessment process. Discuss the patient’s case to ensure a comprehensive understanding of the disease status and align with the patient’s will and expectations. Multidisciplinary management significantly enhances care levels in patients with GEP-NETs.

It is essential to approach disease progression assessment in GEP-NETs using these methods. Treatment decisions are often based on a comprehensive evaluation of all available information, with plans typically personalized to each patient’s specific situation, considering factors like tumor grade, location, and overall health status.

Recommendation

An accurate multidisciplinary assessment of patients who are candidates for RLT is mandatory before initiating treatment. This assessment should include a complete radiological evaluation using CT and/or MRI, as well as SSTR-PET.
In selected patients with a significant change in disease behavior—such as a noticeable increase in tumor lesions or an evident increase in tumor burden—performing [18F]FDG PET/CT and/or repeating the histological evaluation may be proposed (3a - A).

Q3. If and how does FDG PET influence the decision to perform RLT?

While [18F]FDG PET/CT is not typically the primary imaging modality for GEP-NETs, it can be informative in certain cases and may influence decisions regarding RLT administration. The European Association of Nuclear Medicine (EANM) and ENETS guidelines recommend including [18F]FDG PET/CT in the diagnostic pathway for higher G2 (Ki-67: 10–20%), G3 NET, and NEC. The 2020 ESMO guidelines offer broader recommendations, suggesting the evaluation of both [18F]FDG PET/CT and SSTR-PET for all G2-G3 NETs. However, [18F]FDG PET/CT can also be positive in low-grade NETs of the G1 type, maintaining an unfavorable prognostic significance even in these tumors, confirming that the role of this technique in low-proliferation forms still needs full clarification. Some previous studies have investigated the use of both tracers, but they rely on retrospective data from populations that are not homogeneous regarding the primary lesion. SSTR-PET and [18F]FDG PET/CT together may be indicated in certain cases, including at initial diagnosis for tumors with intermediate proliferative activity and during follow-up when assessing treatment changes or discrepancies between radiological and clinical evaluations. Here is how [18F]FDG PET/CT might influence the decision to perform RLT.

Tumor Metabolic Activity: [18F]FDG PET/CT provides information about the metabolic activity of tumors. NETs are generally slow-growing and may not exhibit high glucose metabolism, making [18F]FDG PET/CT less sensitive for these tumors.
However, in poorly differentiated or more aggressive lesions with higher metabolic activity, [18F]FDG PET/CT may be used to assess the presence, number, and location of aggressive lesions, guiding treatment decisions towards alternatives to RLT, such as chemotherapy.

Tumor Intra- and Inter-lesion Heterogeneity: GEP-NETs may exhibit heterogeneity in receptor expression and metabolic activity. Combining information from both radiotracers provides a more comprehensive view of tumor characteristics. For instance, elevated [18F]FDG PET/CT activity might indicate swift progression in pancreatic NETs, even when early diagnosed or confirmed as well-differentiated. The presence of [18F]FDG PET/CT uptake could indicate undifferentiated disease foci, significantly impacting therapy response and prognosis. Lesions showing matched SSTR-PET and [18F]FDG PET/CT uptake may suggest a good probability of response to RLT, even in combination with chemotherapy.

Disease Staging, Monitoring, and Therapeutic Decision-Making: The decision to perform RLT is based on the presence of SSTRs on tumor cells. If GEP-NETs show SSTR expression, RLT may be considered. However, in cases of uncertain diagnostic presentations (such as non-conclusive findings in CT, MRI, or SSTR-PET) or rapid clinical progression, it is advisable to also perform [18F]FDG PET/CT for a comprehensive overview of the multi-metastatic disease.

Ultimately, the decision to perform RLT is multifaceted and should be made in consultation with a multidisciplinary team of specialists, considering the specific characteristics of the patient’s tumors and their responses to various imaging modalities and previous therapies. The goal is to tailor the treatment plan to the individual patient’s needs and the characteristics of their neuroendocrine lesions.
Recommendation

[18F]FDG PET/CT is recommended before RLT in cases with heterogeneous uptake at SSTR-PET, and in patients with suspicion of rapidly progressive disease (3b - A).

Q4. What is the evidence for choosing RLT versus targeted agents after the failure of somatostatin analogues?

The phase III trials conducted in patients with intestinal NET reported that median PFS was not reached for RLT with 177Lu-DOTATATE, while it was 11 months and 16.4 months for everolimus in non-functioning and functioning tumors, respectively. Although these studies were designed on populations that are not directly comparable, the higher anti-proliferative efficacy of RLT compared with everolimus is now well established. This constitutes the first and most significant evidence in favor of choosing RLT after the failure of SSA treatment. The ORR was significantly higher with RLT than with everolimus. In patients with advanced panNET initially considered unresectable or borderline, neoadjuvant treatment with 177Lu-DOTATATE enabled successful surgery in 31% of cases. Therefore, early use of RLT can alter these tumors’ natural history. Patients with GEP-NET who are candidates to receive SSA as first-line therapy typically present with low-proliferating tumors and a long life expectancy. In this setting, the second-line therapy needs to be effective, but safety is of primary importance to avoid serious adverse events and related treatment interruptions or withdrawals. The ultimate goal is to achieve long-term tumor stabilization and a good QoL. For this purpose, RLT offers a better risk/benefit ratio than targeted therapies. By comparing different therapeutic sequences, RLT was found to be safer than either everolimus or chemotherapy as a second-line therapy. From the patient’s perspective, a French national survey indicated that RLT had the best median perceived tolerance compared to all other treatments, including everolimus, sunitinib, and chemotherapy.
On the other hand, toxicity, rather than tumor progression, was the most frequent reason for discontinuation of everolimus and sunitinib. The long-term safety results of the NETTER-1 trial confirmed that 177Lu-DOTATATE is safe, and no new serious adverse events were reported during the long-term follow-up. Beyond the low toxicity rate, RLT has been reported to significantly improve health-related quality of life in large randomized trials performed in gastroenteropancreatic NETs, improving both global health status and specific symptoms. The phase II non-comparative OCLURANDOM study recently randomized patients with advanced, progressive, SSTR-positive panNET to receive either 177Lu-DOTATATE or sunitinib. The 12-month PFS rate was 80.5% in the RLT arm versus 42% in the sunitinib arm, thus confirming that RLT outperforms targeted agents in patients progressive on first-line therapy with SSA. Two prospective, randomized trials (COMPETE and COMPOSE) are currently underway to compare the efficacy of RLT versus everolimus or versus the best standard of care (chemotherapy or everolimus, according to the investigator’s choice) in patients with unresectable progressive GEP-NETs (ClinicalTrials.gov NCT03049189 and NCT04919226).

Recommendation

In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over targeted agents (everolimus or sunitinib) after the failure of SSA due to its better-expected efficacy and safety profile (2b - B).

Q5. What is the evidence for choosing RLT versus chemotherapy after the failure of somatostatin analogs?

Both retrospective and prospective evidence indicates that chemotherapy is effective in treating GEP-NETs. Specifically, alkylating agents such as streptozocin, dacarbazine, and temozolomide (alone or in combination with capecitabine) have demonstrated antitumor activity in panNETs.
The prospective ECOG-ACRIN E2211 phase II trial recently compared temozolomide alone to temozolomide plus capecitabine in 144 patients with advanced progressive G1-G2 panNETs. The study showed a significant improvement in PFS in the combination arm (median PFS 22.7 vs. 14.4 months, respectively) and a trend towards improved ORR (40% vs. 34%) and median overall survival (OS; 58.7 vs. 53.8 months, respectively), although 45% of patients experienced G3/G4 toxicity. While most well-differentiated gastrointestinal NETs tend to be resistant to alkylating agents, fluoropyrimidine-based combinations (e.g., FOLFOX) show antitumor activity in this patient population, potentially causing rapid tumor shrinkage. A large, multicenter, retrospective study of 508 patients with advanced GEP-NETs recently showed that second-line therapy with RLT was associated with improved PFS compared to targeted therapies or chemotherapy (median 2.2 years [95% CI, 1.8–2.8 years] vs. 0.6 years [95% CI, 0.4–1.0 years], respectively, in the matched population; P < 0.001). This effect was consistent across different primary sites and hormonal statuses, though the advantage in PFS was not observed in tumors with a Ki-67 greater than 10%. According to retrospective evidence, RLT is associated with improved survival outcomes in patients who did not receive chemotherapy before RLT initiation. Several clinical trials are currently comparing RLT with chemotherapy in patients with progressive disease (NCT05247905, NCT04919226), and results are eagerly awaited. Overall, many factors should be considered when choosing between RLT and chemotherapy in patients who are progressive on first-line SSA therapy. These include the pace of tumor growth and the need for rapid tumor shrinkage. While the density of SSTR expression on SSTR-PET scan can accurately preselect the patients most likely to respond to RLT, methylguanine-DNA methyltransferase (MGMT) testing might be helpful in predicting response to temozolomide-based regimens.
Recommendation

In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over chemotherapy after the failure of SSA. However, chemotherapy remains an option to consider in the treatment of panNET patients who have a high tumor burden and/or the presence of tumor-related symptoms, or in cases of rapid progression, regardless of the primary tumor site (3b - A).

Q6. What is the evidence for choosing RLT versus high-dose somatostatin analogs after the failure of standard-dose somatostatin analogs in non-functioning (NF) NETs?

While it is well-established that escalating the dose of SSA can enhance symptom control in functioning tumors when the standard SSA dosage proves ineffective, the actual impact of increased SSA dosages on tumor growth, particularly in the clinical context of non-functioning tumors, remains ambiguous. Until recently, selecting a second-line therapy after the standard SSA dose fails in well-differentiated G1-G2 GEP-NETs was notably challenging. Earlier retrospective studies suggested a potential improvement in PFS with increased SSA doses. However, this observation was not corroborated in prospective studies involving patients with radiologically confirmed progressive disease under standard SSA doses. In such clinical scenarios, the reported median PFS values, as indicated by the CLARINET FORTE study and the control arms of the NETTER-1 trial, ranged between 5 and 8 months. A recent meta-analysis examining 783 patients in 11 studies found that the proportion of patients experiencing disease progression under high-dose SSA was 62% (95% confidence interval, 53–70%) per 100 subjects treated annually. Conversely, in the same clinical scenario of progressive well-differentiated GEP-NETs, RLT demonstrated a significantly higher PFS rate, as observed in both randomized controlled trials and real-world study settings.
Data from the phase III NETTER-1 trial, where the median PFS was not reached in the initial analysis and was estimated at 25 months in the final analysis, align with findings from retrospective multicenter studies. These studies reported a median PFS of approximately 2.5 years. A similar trend was observed when considering the ORR as an endpoint. In the context of high-dose SSA, although earlier retrospective small-scale studies reported promising objective response rates of up to 31%, prospective trials indicated a significantly lower likelihood of achieving an objective tumor response, with rates ranging between 3 and 4%. On the other hand, when analyzing the ORR for RLT, the values vary significantly. The NETTER-1 study reported a rate of 18%, while the larger retrospective study by Brabander et al. indicated a range between 31 and 58%. Based on these considerations, RLT has demonstrated greater efficacy compared to high-dose SSA in the various clinical settings evaluated, including both randomized controlled trials and retrospective real-world studies. This superiority is evident in terms of both PFS and ORR.

Recommendation

In patients with progressive G1-G2 GEP-NETs, RLT is recommended as a second-line treatment over high-dose SSA after the failure of standard-dose SSA due to its better expected efficacy. High-dose SSA remains an option as a temporary bridge until RLT initiation or in patients unfit for other antitumor treatments due to comorbidities (1b - A).

Q7. How and when should the efficacy of RLT be monitored after initiating treatment?

3D imaging, particularly through contrast-enhanced CT or MRI, is the main method for evaluating treatment response by observing changes in lesion dimensions over time. Tumor size measurements are primarily conducted according to the Response Evaluation Criteria in Solid Tumours version 1.1 (RECIST 1.1). However, assessing treatment response based solely on changes in tumor size presents several challenges, especially with GEP-NETs.
These tumors may stabilize or initially increase in size even when responding to treatment. Additionally, the central tumor necrosis frequently reported during RLT complicates assessments with radiological criteria due to ‘false-positive’ increases in lesion size. Furthermore, shrinkage following RLT can be a delayed occurrence. These factors underscore the limitations associated with RECIST 1.1 criteria, suggesting that their use in evaluating slow-growing neoplasms such as GEP-NETs should be approached cautiously. To address these limitations, the Choi criteria have been introduced, assessing both the dimensional changes and the density variation of lesions in contrast-enhanced CT images. Numerous studies comparing the two criteria for NET evaluation consistently show equal or markedly superior results for Choi versus RECIST. However, it is important to note that while the arterial phase of CT is most commonly used in assessing GEP-NETs, considering their vascularity, the Choi criteria rely on images obtained during the portal venous phase. This discrepancy represents a major limitation in applying the Choi criteria in the neuroendocrine context. In light of these challenges, new methods have been proposed to assess therapy response, including the application of long-established tools used for evaluating growth rates in other neoplastic pathologies. The tumor growth rate (TGR) is one emerging tool based on the variation in the volume of target lesions, normalized for the time between two radiological assessments (CT or MRI). Recent studies have also highlighted its application in the neuroendocrine field, showing that baseline TGR highlights the heterogeneity of well-differentiated GEP-NETs and predicts increases in Ki-67 index over time. Additionally, Weber M et al. evaluated the utility of hybrid techniques such as SSTR-PET/MRI in a small sample study.
The results suggest that pre-therapeutic SSTR-PET/MRI may not be a reliable predictor of treatment response to RLT in NET patients. Conversely, patients treated with SSA exhibit variations in the apparent diffusion coefficient map on MRI imaging compared to those treated with RLT. Finally, features extracted from SSTR-PET/MRI performed before RLT were not good predictors of treatment response.

Recommendation

RECIST 1.1 criteria, evaluated by contrast-enhanced CT or MRI, should be used to monitor the efficacy of RLT during follow-up. Attention should also be paid to changes in tumor lesion morphology beyond modifications in their size (3b - A).

Q8. How to manage frail patients who have to undergo RLT?

Frailty is a syndrome with complex multifactorial physiopathology affecting up to 17% of the geriatric population. This clinical status implies major vulnerability across multiple health domains, including weakness, decreased functional performance, unintentional weight loss, cognitive impairment, increased risk of comorbidities, and organ dysfunction, leading to adverse health outcomes. As the prevalence of GEP-NETs and the elderly population rate increase globally, it is reasonable to hypothesize that a progressively higher proportion of patients with GEP-NETs will be frail. Data from the Surveillance, Epidemiology, and End Results (SEER) analysis of 29,664 GEP-NET cases showed that the median age at diagnosis was 63 years, with the peak incidence observed at age 80. Additionally, another database analysis of 22,744 cases revealed the highest incidence rate of GEP-NETs in patients over 70 years old, with 16–17 cases per 100,000. The frail oncological population tends to receive delayed or incomplete diagnostic evaluations and often suboptimal therapy, considering the patient’s comorbidities and major risk of toxicity or complications, leading to an unfavorable therapeutic risk/benefit ratio.
Regarding RLT, frail patients more commonly present with altered renal function or hematological disorders and thus tend to be less frequently eligible for RLT. Currently, there are no standardized recommendations in the literature regarding the use of RLT in frail patients. Theiler et al. conducted a retrospective matched cohort study to assess the efficacy and safety of RLT with 90Y-DOTATOC or 177Lu-DOTATATE in elderly patients over 79 years old affected by well-differentiated G1 or G2, SSTR-positive NETs compared to their younger counterparts. The exclusion criteria included ECOG performance status ≥ 3, hematological impairment (hemoglobin < 80 g/L, platelet count < 75 × 10⁹/L), reduced eGFR (< 45 mL/min), or increased levels of AST/ALT (> 3 times the upper limit of normal). Overall, despite a higher baseline rate of comorbidities, renal and hematological impairment, and a lower ECOG performance status in the elderly cohort, RLT was found to be an effective strategy with a similar toxicity profile in both groups. Nevertheless, long-term adverse events, particularly renal dysfunction when 90Y-DOTATOC rather than 177Lu-DOTATATE is administered, cannot be completely ruled out. No statistically significant differences were observed regarding the OS. The median OS in the elderly and younger groups was 3.4 years and 6.0 years, respectively ( p = 0.094). These results suggest that RLT may be a valid and relatively safe therapeutic option in a carefully selected cohort of frail patients. However, more robust and large-cohort studies are warranted to explore the risk/benefit ratio, also in the long term, of RLT in this subgroup of patients. Such initiatives would be of remarkable impact, considering that alternative medical options such as targeted drugs (everolimus or sunitinib) or systemic chemotherapy are generally associated with higher toxicity and deterioration of QoL.
An interdisciplinary and multidimensional approach is fundamental to guide therapeutic decisions in such a vulnerable population, especially when standardized guidelines are lacking. To provide the best care for frail individuals, it is necessary to scrupulously identify adequately eligible patients. Therefore, in a multidisciplinary context, validated assessment tools should be implemented to prudently evaluate important domains such as functional, cognitive, and nutritional status, potential limitations in activities of daily living, social settings, and comorbidities.

Recommendation

RLT should also be considered in frail patients as a valid therapeutic option despite the lack of specific supporting data. It is reasonable, especially in the elderly population with comorbidities, to pay greater attention to renal function and potential marrow toxicity before initiating therapy (5 - B).

Q9. Is there room for RLT in G3 GEP-NETs?

Retrospective evidence suggested that RLT can be a relevant therapeutic option in patients with SSTR-positive G3 GEP-NETs, leading to disease control rates ranging between 30% and 80% and median PFS between 9 and 23 months. In the recent NETTER-2 trial, which evaluated 226 enrolled patients, 35% had G3 tumors. Overall, treatment with RLT was associated with a significant improvement in PFS (median PFS: 8.5 months in the control arm versus 22.8 months in the investigational arm; stratified HR: 0.28, p < 0.0001) and ORR (9.3% in the control arm versus 43% in the investigational arm; stratified OR: 7.81, p < 0.0001). Notably, PFS and ORR improvements were consistent across all pre-specified subgroups, including the G3 subgroup. Based on these results, it is likely that first-line treatment with RLT will soon be approved by regulatory authorities, becoming the first standard treatment option supported by high-level evidence for patients with advanced, G2-G3, SSTR-positive GEP-NETs.
Another prospective phase III trial, the COMPOSE trial, is currently underway to compare first- or second-line RLT versus the best standard of care (chemotherapy or everolimus, according to the investigator’s choice) in patients with either G2 or G3 unresectable SSTR-positive GEP-NETs. The trial results are eagerly awaited, as they will provide much-needed information on treatment sequencing in patients with G3 GEP-NETs as well. No high-level evidence of antitumor activity currently exists for treatment modalities other than RLT in patients with metastatic G3 GEP-NETs. According to retrospective data, and in light of the recent results of the NETTER-2 trial, SSA may exert some antiproliferative activity in patients with G3 GEP-NETs, although with significantly inferior outcomes compared to RLT. On the other hand, small series have documented the activity of either sunitinib or everolimus (alone or in combination with temozolomide) in G3 GEP-NETs. Alkylating-based (e.g., CAPTEM or STZ/5-FU) and fluoropyrimidine-based (e.g., FOLFOX) chemotherapy protocols appear effective in patients with G3 GEP-NETs. According to retrospective evidence, the CAPTEM regimen is associated with a median PFS ranging between 9 and 15 months in patients with advanced G3 tumors of the digestive tract. Responses to temozolomide-based regimens appear more frequent in the first-line setting and in pancreatic primaries. The efficacy of etoposide-platinum chemotherapy appears limited in advanced G3 NETs, with a response rate in this population inferior to that observed in patients with poorly differentiated NECs. Overall, RLT might currently be considered a preferred option in the first-line treatment of patients with advanced SSTR-positive G3 GEP-NETs. Chemotherapy, particularly alkylating-based regimens, might be reserved for SSTR-negative G3 NETs or for patients progressing on RLT.
Recommendation: As soon as RLT is approved by regulatory authorities, it should be considered a valid option for patients with G2-G3 GEP-NETs expressing SSTR (1b - A).
Q10. Is there a rationale for repeating RLT treatment?
The rationale for repeating RLT in patients with GEP-NETs involves several factors. The decision is typically individualized, based on a combination of clinical assessments, imaging, and biochemical evaluations. If there is evidence of disease progression or recurrence following the initial course of RLT, a repeat treatment may be considered to target new or recurrent lesions. Initially, an SSTR-PET evaluation should be conducted to confirm the presence of somatostatin receptors on the NET lesions. According to the Delphi consensus, a partial response or stable disease must have been achieved for at least one year after the first RLT treatment. To accurately determine which patients could benefit from retreatment, implementing dosimetry in clinical practice is crucial. Dosimetry correlates tumor-absorbed doses with treatment effectiveness, especially in larger tumors. Recent studies have demonstrated the safety and efficacy of an RLT rechallenge with dosimetry calculations based on healthy organs such as the kidneys and bone marrow. These findings suggest that incorporating personalized dosimetry, aimed at identifying organs with dose limits and determining the maximum tolerated accumulated activity, can enhance standard clinical practice by ensuring that therapeutic doses stay within safe limits for healthy organs. Notably, patients who reached the maximum tolerable absorbed dose of 23 Gy in their kidneys experienced nearly double the median PFS and OS. This highlights the significant potential benefit of adopting a personalized approach over fixed dosing in terms of oncological outcomes. The decision to repeat RLT is complex and requires careful consideration of various factors.
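Because the kidney is the dose-limiting organ cited above (a cumulative absorbed dose of 23 Gy), deciding whether a retreatment cycle fits within the remaining margin reduces to simple bookkeeping of per-cycle absorbed doses. A minimal, hypothetical Python sketch (the per-cycle dose values are invented for illustration; real values come from patient-specific post-therapy dosimetry):

```python
# Illustrative bookkeeping of cumulative kidney-absorbed dose across RLT
# cycles against the 23 Gy limit cited above. Per-cycle doses below are
# hypothetical examples, not measured data.

KIDNEY_DOSE_LIMIT_GY = 23.0

def remaining_kidney_budget(per_cycle_doses_gy, limit_gy=KIDNEY_DOSE_LIMIT_GY):
    """Cumulative absorbed dose so far and the remaining margin (both in Gy)."""
    cumulative = sum(per_cycle_doses_gy)
    return cumulative, limit_gy - cumulative

# Four initial cycles with measured kidney doses (hypothetical numbers)
initial_course = [3.1, 2.8, 3.4, 3.0]
cumulative, margin = remaining_kidney_budget(initial_course)
print(f"cumulative: {cumulative:.1f} Gy, margin before 23 Gy: {margin:.1f} Gy")

# A retreatment cycle is only proposed if it stays within the remaining margin
proposed_cycle_gy = 3.2
if proposed_cycle_gy <= margin:
    print("retreatment cycle fits within the kidney dose budget")
```

The point of the sketch is the design choice it mirrors: a personalized cumulative-dose budget per organ at risk, rather than a fixed number of cycles for every patient.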
Regular follow-up assessments, imaging studies, and ongoing communication between the patient and the dedicated tumor board are crucial for determining the most appropriate course of action in managing NETs.
Recommendation: Although not yet approved by regulatory authorities, retreatment with RLT should be considered, at the time of disease progression, a valid therapeutic option for those patients who had a favorable response to the initial RLT course. Dosimetry data, including those from the initial RLT course, should be used to tailor a personalized dose for the retreatment approach (3b - B).
Q1. Who is the potential candidate for treatment with RLT?
RLT with 177Lu-DOTATATE is currently approved by both the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of unresectable or metastatic, progressive, well-differentiated, G1/G2, SSTR-positive GEP-NETs. This indication is based on the multicenter, phase III, randomized, open-label NETTER-1 trial and large retrospective cohort studies. The NETTER-1 trial randomized 229 patients with well-differentiated, metastatic midgut NETs who progressed on standard-dose octreotide LAR to receive either 177Lu-DOTATATE at 7.4 GBq every 8 weeks or octreotide i.m. at 60 mg every 4 weeks. The estimated rate of PFS at month 20 was 65% in the 177Lu-DOTATATE arm and 11% in the control arm (HR: 0.21, p < 0.0001), with consistent benefits across major prespecified subgroups. Moreover, RLT with 177Lu-DOTATATE significantly improved many QoL domains compared with high-dose octreotide. While the NETTER-1 trial enrolled only patients with midgut NETs, a large body of evidence suggests that RLT with 177Lu-DOTATATE is also safe and effective in SSTR-positive pancreatic and hindgut primaries.
More recently, the multicenter, phase III, randomized, open-label NETTER-2 trial investigated 177Lu-DOTATATE plus octreotide versus high-dose octreotide in patients with newly diagnosed, advanced, SSTR-positive G2/G3 GEP-NETs with Ki-67 ranging between 10% and 55%. The median PFS was significantly prolonged in the investigational arm (22.8 months) compared to the control arm (8.5 months; stratified HR: 0.28, p < 0.0001), with a significantly higher overall response rate (ORR) in the 177Lu-DOTATATE arm (43%) versus the high-dose octreotide arm (9.3%; OR: 7.81, p < 0.0001). On this basis, regulatory authorities will likely formally expand the indications for RLT to include frontline treatment of patients with GEP-NETs harboring a Ki-67 between 10% and 55%. At present, potential candidates for RLT with 177Lu-DOTATATE include patients with advanced SSTR-positive GEP-NETs who have progressed on prior SSA therapy. Since a high tumor burden negatively impacts the efficacy of RLT, early placement of RLT in the therapeutic algorithm is advocated. Therefore, all patients with SSTR-positive advanced GEP-NETs progressing on first-line treatment should be considered for RLT. In patients with bulky, symptomatic disease (particularly in the case of pancreatic primaries) who need rapid tumor shrinkage, chemotherapy might be preferred over RLT. In the future, potential candidates for RLT will also include patients with newly diagnosed G2/G3 GEP-NETs and Ki-67 ranging between 10% and 55%. The progressive expansion of the patient population potentially amenable to treatment with 177Lu-DOTATATE, in line with the advent of 177Lu-PSMA-617 for the treatment of prostate cancer, might pose several challenges from a production and drug-administration standpoint. Timely preparation is needed to avoid bottlenecks and allow the administration of RLT to all potential candidates without delays.
Recommendation: The candidate for RLT is a patient with advanced (unresectable or metastatic) SSTR-positive GEP-NET who has progressed on prior therapy with SSA. For these patients, early incorporation of 177Lu-DOTATATE RLT into the treatment algorithm is recommended (1b - A).
Q2. How should progressive disease be defined before planning RLT?
Assessing disease progression in GEP-NETs before planning RLT involves a thorough evaluation using various clinical, imaging, and laboratory methods. The key steps and considerations are as follows.
Imaging Studies: Use radiological imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) scans to assess evidence of primary tumors and metastases and to estimate tumor burden. These investigations help quantify neoplastic infiltration, pleural or ascitic fluid volume, and the presence of carcinoid heart disease (evaluated by echocardiography). CT and MRI also identify previously unrecognized lesions or conditions needing urgent treatment, such as pathological spinal fractures, and are essential for ruling out indications for locoregional therapies like embolization or chemoembolization in patients with liver-only disease.
Functional Imaging: Functional imaging, particularly 68Ga-SSTR PET scans (SSTR-PET), is specific for NETs. This imaging modality helps identify the presence of SSTRs on tumor cells, guiding the selection of patients suitable for RLT. For lesions with high proliferative indexes, [18F]FDG PET/CT may complement the assessment by visualizing heightened metabolic activity, thus refining the evaluation of lesions targeted with alternative therapies. Recent advancements include the introduction of volumetric parameters like SSR-derived tumor volume and total lesion SSR as tools to aid in predicting PFS before RLT.
Biomarkers: While specific tumor markers are assessed in functioning tumors associated with clinical syndromes, the use of biochemical markers such as chromogranin A, alkaline phosphatase, or alterations in transaminase ratios has been proposed to predict therapy effectiveness, although without definitive evidence of their predictive significance. Elevated chromogranin A levels alone should not be considered definitive evidence of disease progression due to the marker’s low specificity.
Histological Evaluation: For long-term survivors with multiple secondary disease localizations and historical biopsies, it is crucial to consider a further histological evaluation before planning RLT because of the potential change in tumor grade over time. This is especially pertinent if the historical biopsy was taken from the primary tumor and there has been a significant increase in the number and sites of metastatic lesions. Performing an [18F]FDG PET/CT scan may help guide the selection of the most aggressive metastasis for biopsy.
Clinical Symptoms: Assess the patient’s symptoms, including changes in flushing, diarrhea, abdominal pain, or other related symptoms. Worsening or new symptoms may indicate disease progression, necessitating a CT, MRI, or PET scan to provide a comprehensive overview of the patient’s clinical condition.
Multidisciplinary Team Consultation: Engage a multidisciplinary team experienced in managing GEP-NETs, including oncologists, endocrinologists, gastroenterologists, radiologists, nuclear medicine specialists, pathologists, and surgeons, in the assessment process. Discuss the patient’s case to ensure a comprehensive understanding of the disease status and to align with the patient’s wishes and expectations. Multidisciplinary management significantly enhances the level of care in patients with GEP-NETs. It is essential to approach the assessment of disease progression in GEP-NETs using these methods.
Treatment decisions are often based on a comprehensive evaluation of all available information, with plans typically personalized to each patient’s specific situation, considering factors like tumor grade, location, and overall health status.
Recommendation: An accurate multidisciplinary assessment of patients who are candidates for RLT is mandatory before initiating treatment. This assessment should include a complete radiological evaluation using CT and/or MRI, as well as SSTR-PET. In selected patients with a significant change in disease behavior (such as a noticeable increase in tumor lesions or an evident increase in tumor burden), performing [18F]FDG PET/CT and/or repeating the histological evaluation may be proposed (3a - A).
Q3. If and how does [18F]FDG PET influence the decision to perform RLT?
While [18F]FDG PET/CT is not typically the primary imaging modality for GEP-NETs, it can be informative in certain cases and may influence decisions regarding RLT administration. EANM and ENETS guidelines recommend including [18F]FDG PET/CT in the diagnostic pathway for higher G2 (Ki-67: 10–20%), G3 NET, and NEC. The 2020 ESMO guidelines offer broader recommendations, suggesting the evaluation of both [18F]FDG PET/CT and SSTR-PET for all G2-G3 NETs. However, [18F]FDG PET/CT can also be positive in low-grade NETs of the G1 type, where it retains an unfavorable prognostic significance; this confirms that the role of the technique in low-proliferation forms still needs full clarification. Some previous studies have investigated the use of both tracers, but they rely on retrospective data from populations that are not homogeneous regarding the primary lesion. SSTR-PET and [18F]FDG PET/CT together may be indicated in certain cases, including at initial diagnosis for tumors with intermediate proliferative activity and during follow-up when assessing treatment changes or discrepancies between radiological and clinical evaluations.
[18F]FDG PET/CT might influence the decision to perform RLT in the following ways.
Tumor Metabolic Activity: [18F]FDG PET/CT provides information about the metabolic activity of tumors. NETs are generally slow-growing and may not exhibit high glucose metabolism, making [18F]FDG PET/CT less sensitive for these tumors. However, in poorly differentiated or more aggressive lesions with higher metabolic activity, [18F]FDG PET/CT may be used to assess the presence, number, and location of aggressive lesions, guiding treatment decisions towards alternatives to RLT, such as chemotherapy.
Intra- and Inter-lesion Tumor Heterogeneity: GEP-NETs may exhibit heterogeneity in receptor expression and metabolic activity. Combining information from both radiotracers provides a more comprehensive view of tumor characteristics. For instance, elevated [18F]FDG PET/CT activity might indicate swift progression in pancreatic NETs, even when diagnosed early or confirmed as well differentiated. The presence of [18F]FDG PET/CT uptake could indicate undifferentiated disease foci, significantly impacting therapy response and prognosis. Lesions showing matched SSTR-PET and [18F]FDG PET/CT uptake may suggest a good probability of response to RLT, even in combination with chemotherapy.
Disease Staging, Monitoring, and Therapeutic Decision-Making: The decision to perform RLT is based on the presence of SSTRs on tumor cells. If GEP-NETs show SSTR expression, RLT may be considered. However, in cases of uncertain diagnostic presentations (such as non-conclusive findings on CT, MRI, or SSTR-PET) or rapid clinical progression, it is advisable to also perform [18F]FDG PET/CT for a comprehensive overview of the multi-metastatic disease.
Ultimately, the decision to perform RLT is multifaceted and should be made in consultation with a multidisciplinary team of specialists, considering the specific characteristics of the patient’s tumors and their responses to various imaging modalities and previous therapies. The goal is to tailor the treatment plan to the individual patient’s needs and the characteristics of their neuroendocrine lesions.
Recommendation: [18F]FDG PET/CT is recommended before RLT in cases with heterogeneous uptake at SSTR-PET and in patients with suspicion of rapidly progressive disease (3b - A).
Q4. What is the evidence for choosing RLT versus targeted agents after the failure of somatostatin analogues?
The phase III trials conducted in patients with intestinal NETs reported that median PFS was not reached for RLT with 177Lu-DOTATATE, while it was 11 months and 16.4 months for everolimus in non-functioning and functioning tumors, respectively. Although these studies were designed on populations that are not directly comparable, the higher antiproliferative efficacy of RLT compared with everolimus is now well established. This constitutes the first and most significant piece of evidence in favor of choosing RLT after the failure of SSA treatment. The ORR was significantly higher with RLT than with everolimus. In patients with advanced panNET initially considered unresectable or borderline, neoadjuvant treatment with 177Lu-DOTATATE enabled successful surgery in 31% of cases. Therefore, early use of RLT can alter the natural history of these tumors. Patients with GEP-NET who are candidates to receive SSA as first-line therapy typically present with low-proliferating tumors and a long life expectancy. In this setting, the second-line therapy needs to be effective, but safety is of primary importance to avoid serious adverse events and related treatment interruptions or withdrawals. The ultimate goal is to achieve long-term tumor stabilization and a good QoL.
For this purpose, RLT offers a better risk/benefit ratio than targeted therapies. When different therapeutic sequences were compared, RLT was found to be safer than either everolimus or chemotherapy as a second-line therapy. From the patient’s perspective, a French national survey indicated that RLT had the best median perceived tolerance compared to all other treatments, including everolimus, sunitinib, and chemotherapy. On the other hand, toxicity, rather than tumor progression, was the most frequent reason for discontinuation of everolimus and sunitinib. The long-term safety results of the NETTER-1 trial confirmed that 177Lu-DOTATATE is safe, and no new serious adverse events were reported during the long-term follow-up. Beyond its low toxicity rate, RLT has been reported to significantly improve health-related quality of life in large randomized trials performed in gastroenteropancreatic NETs, benefiting both global health status and specific symptoms. The phase II non-comparative OCLURANDOM study recently randomized patients with advanced, progressive, SSTR-positive panNET to receive either 177Lu-DOTATATE or sunitinib. The 12-month PFS rate was 80.5% in the RLT arm versus 42% in the sunitinib arm, thus confirming that RLT outperforms targeted agents in patients progressing on first-line therapy with SSA. Two prospective, randomized, phase III trials (COMPETE and COMPOSE) are currently underway to compare the efficacy of RLT versus everolimus or versus the best standard of care (chemotherapy or everolimus, according to the investigator’s choice) in patients with unresectable progressive GEP-NETs (ClinicalTrials.gov NCT03049189 and NCT04919226).
Recommendation: In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over targeted agents (everolimus or sunitinib) after the failure of SSA due to its better expected efficacy and safety profile (2b - B).
Q5.
What is the evidence for choosing RLT versus chemotherapy after the failure of somatostatin analogs?
Both retrospective and prospective evidence indicates that chemotherapy is effective in treating GEP-NETs. Specifically, alkylating agents such as streptozocin, dacarbazine, and temozolomide (alone or in combination with capecitabine) have demonstrated antitumor activity in panNETs. The prospective ECOG-ACRIN E2211 phase II trial recently compared temozolomide alone to temozolomide plus capecitabine in 144 patients with advanced progressive G1-G2 panNETs. The study showed a significant improvement in PFS in the combination arm (median PFS 22.7 vs. 14.4 months) and a trend towards improved ORR (40% vs. 34%) and median OS (58.7 vs. 53.8 months), although 45% of patients experienced G3/G4 toxicity. While most well-differentiated gastrointestinal NETs tend to be resistant to alkylating agents, fluoropyrimidine-based combinations (e.g., FOLFOX) show antitumor activity in this patient population, potentially causing rapid tumor shrinkage. A large, multicenter, retrospective study of 508 patients with advanced GEP-NETs recently showed that second-line therapy with RLT was associated with improved PFS compared to targeted therapies or chemotherapy (median 2.2 years [95% CI, 1.8–2.8 years] vs. 0.6 years [95% CI, 0.4–1.0 years] in the matched population; p < 0.001). This effect was consistent across different primary sites and hormonal statuses, though the advantage in PFS was not observed in tumors with a Ki-67 greater than 10%. According to retrospective evidence, RLT is associated with improved survival outcomes in patients who did not receive chemotherapy before RLT initiation. Several clinical trials are currently comparing RLT with chemotherapy in patients with progressive disease (NCT05247905, NCT04919226), and results are eagerly awaited.
Overall, many factors should be considered when choosing between RLT and chemotherapy in patients who progress on first-line SSA therapy. These include the pace of tumor growth and the need for rapid tumor shrinkage. While the density of SSTR expression on SSTR-PET can accurately preselect the patients most likely to respond to RLT, O6-methylguanine-DNA methyltransferase (MGMT) testing might be helpful in predicting response to temozolomide-based regimens.
Recommendation: In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over chemotherapy after the failure of SSA. However, chemotherapy remains an option to consider in the treatment of panNET patients with a high tumor burden and/or tumor-related symptoms, or in cases of rapid progression, regardless of the primary tumor site (3b - A).
Q6. What is the evidence for choosing RLT versus high-dose somatostatin analogs after the failure of standard-dose somatostatin analogs in non-functioning (NF) NETs?
While it is well established that escalating the dose of SSA can enhance symptom control in functioning tumors when the standard SSA dosage proves ineffective, the actual impact of increased SSA dosages on tumor growth, particularly in the clinical context of non-functioning tumors, remains ambiguous. Until recently, selecting a second-line therapy after the failure of standard-dose SSA in well-differentiated G1-G2 GEP-NETs was notably challenging. Earlier retrospective studies suggested a potential improvement in PFS with increased SSA doses. However, this observation was not corroborated in prospective studies involving patients with radiologically confirmed progressive disease under standard SSA doses. In such clinical scenarios, the reported median PFS values, as indicated by the CLARINET FORTE study and the control arms of the NETTER-1 trial, ranged between 5 and 8 months.
A recent meta-analysis of 783 patients across 11 studies found a disease-progression rate under high-dose SSA of 62 events (95% CI: 53–70) per 100 subjects treated annually. Conversely, in the same clinical scenario of progressive well-differentiated GEP-NETs, RLT demonstrated a significantly higher PFS rate, as observed in both randomized controlled trials and real-world study settings. Data from the phase III NETTER-1 trial, where the median PFS was not reached in the initial analysis and was estimated at 25 months in the final analysis, align with findings from retrospective multicenter studies. These studies reported a median PFS of approximately 2.5 years. A similar trend was observed when considering the ORR as an endpoint. In the context of high-dose SSA, although earlier small-scale retrospective studies reported promising objective response rates of up to 31%, prospective trials indicated a significantly lower likelihood of achieving an objective tumor response, with rates ranging between 3% and 4%. On the other hand, the reported ORR for RLT varies significantly: the NETTER-1 study reported a rate of 18%, while the larger retrospective study by Brabander et al. indicated a range between 31% and 58%. Based on these considerations, RLT has demonstrated greater efficacy than high-dose SSA in the various clinical settings evaluated, including both RCTs and retrospective real-world studies. This superiority is evident in terms of both PFS and ORR.
Recommendation: In patients with progressive G1-G2 GEP-NETs, RLT is recommended as a second-line treatment over high-dose SSA after the failure of standard-dose SSA due to its better expected efficacy. High-dose SSA remains an option as a temporary bridge until RLT initiation or in patients unfit for other antitumor treatments due to comorbidities (1b - A).
Q7.
How and when should the efficacy of RLT be monitored after initiating treatment?
3D imaging, particularly contrast-enhanced CT or MRI, is the main method for evaluating treatment response by observing changes in lesion dimensions over time. Tumor size measurements are primarily conducted according to the Response Evaluation Criteria in Solid Tumours version 1.1 (RECIST 1.1). However, assessing treatment response based solely on changes in tumor size presents several challenges, especially with GEP-NETs. These tumors may stabilize or even initially increase in size while responding to treatment. Additionally, the central tumor necrosis frequently reported during RLT complicates assessment with radiological criteria because of ‘false-positive’ size increases. Furthermore, shrinkage following RLT can be delayed. These factors underscore the limitations of the RECIST 1.1 criteria, suggesting that their use in evaluating slow-growing neoplasms such as GEP-NETs should be approached cautiously. To address these limitations, the Choi criteria have been introduced, assessing both the dimensional changes and the density variation of lesions on contrast-enhanced CT images. Numerous studies comparing the two criteria for NET evaluation consistently show equal or markedly superior results for Choi versus RECIST. However, it is important to note that while the arterial phase of CT is most commonly used in assessing GEP-NETs, given their vascularity, the Choi criteria rely on images obtained during the portal venous phase. This discrepancy represents a major limitation in applying the Choi criteria in the neuroendocrine context. In light of these challenges, new methods have been proposed to assess therapy response, including the application of long-established tools used for evaluating growth rates in other neoplastic pathologies.
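For concreteness, the size-based target-lesion rules of RECIST 1.1 discussed above can be sketched in a few lines. This is a deliberate simplification (non-target lesions and new lesions, which can also define progression, are omitted), assuming sums of longest diameters measured in millimeters:

```python
# Simplified RECIST 1.1 target-lesion response classification.
# Only the sum-of-diameters rules are modeled; non-target and new
# lesions are deliberately left out of this sketch.

def recist_target_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm):
    """Classify CR / PR / PD / SD from sums of longest diameters (mm).

    baseline_sum_mm must be > 0; nadir_sum_mm is the smallest sum
    recorded on study (the reference for progression).
    """
    if current_sum_mm == 0:
        return "CR"  # disappearance of all target lesions
    increase_from_nadir = current_sum_mm - nadir_sum_mm
    if nadir_sum_mm > 0 and increase_from_nadir / nadir_sum_mm >= 0.20 \
            and increase_from_nadir >= 5:
        return "PD"  # >= 20% increase from nadir AND >= 5 mm absolute
    if (baseline_sum_mm - current_sum_mm) / baseline_sum_mm >= 0.30:
        return "PR"  # >= 30% decrease from baseline
    return "SD"

print(recist_target_response(100, 60, 58))  # prints PR
print(recist_target_response(100, 60, 74))  # 14 mm / 60 mm = 23%: prints PD
```

The second example illustrates the caveat raised above: a lesion sum still well below baseline can nonetheless qualify as progression relative to the nadir, which is one reason purely size-based criteria must be interpreted cautiously in slow-growing GEP-NETs.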
The tumor growth rate (TGR) is one emerging tool, based on the variation in the volume of target lesions normalized for the time between two radiological assessments (CT or MRI). Recent studies have also highlighted its application in the neuroendocrine field, showing that baseline TGR captures the heterogeneity of well-differentiated GEP-NETs and predicts increases in the Ki-67 index over time. Additionally, Weber et al. evaluated the utility of hybrid techniques such as SSTR-PET/MRI in a small-sample study. The results suggest that pre-therapeutic SSTR-PET/MRI may not be a reliable predictor of treatment response to RLT in NET patients. Conversely, patients treated with SSA exhibit variations in the apparent diffusion coefficient map on MRI compared to those treated with RLT. Finally, features extracted from SSTR-PET/MRI performed before RLT were not good predictors of treatment response.
Recommendation: RECIST 1.1 criteria, evaluated by contrast-enhanced CT or MRI, should be used to monitor the efficacy of RLT during follow-up. Attention should also be paid to changes in tumor lesion morphology beyond modifications in their size (3b - A).
Q8. How should frail patients who have to undergo RLT be managed?
Frailty is a syndrome with a complex, multifactorial pathophysiology affecting up to 17% of the geriatric population. This clinical status implies major vulnerability across multiple health domains, including weakness, decreased functional performance, unintentional weight loss, cognitive impairment, increased risk of comorbidities, and organ dysfunction, leading to adverse health outcomes. As the prevalence of GEP-NETs and the proportion of elderly people increase globally, it is reasonable to hypothesize that a progressively higher proportion of patients with GEP-NETs will be frail.
Data from the Surveillance, Epidemiology, and End Results (SEER) analysis of 29,664 GEP-NET cases showed that the median age at diagnosis was 63 years, with the peak incidence observed at age 80. Additionally, another database analysis of 22,744 cases revealed the highest incidence rate of GEP-NETs in patients over 70 years old, with 16–17 cases per 100,000 . The frail oncological population tends to receive delayed or incomplete diagnostic evaluations and often suboptimal therapy, considering the patient’s comorbidities and major risk of toxicity or complications, leading to an unfavorable therapeutic risk/benefit ratio . Regarding RLT, frail patients more commonly present with altered renal function or hematological disorders, thus tending to be less frequently eligible for RLT. Currently, there are no standardized recommendations in the literature regarding using RLT in frail patients. Theiler et al. conducted a retrospective matched cohort study to assess the efficacy and safety of RLT with 90Y-DOTATOC or 177Lu-DOTATATE in elderly patients over 79 years old affected by well-differentiated G1 or G2, SSTR-positive NETs compared to their younger counterparts. The exclusion criteria included ECOG performance status ≥ 3, hematological impairment (hemoglobin < 80 g/L, platelet count < 75 × 10 9 /L), reduced eGFR (< 45 mL/min), or increased levels of AST/ALT (> 3 times upper range of normal). Overall, despite a higher baseline rate of comorbidities, renal and hematological impairment, and a lower ECOG performance status in the elderly cohort, RLT was found to be an effective strategy with a similar toxicity profile in both groups. Nevertheless, long-term adverse events, particularly renal dysfunction when administered 90Y-DOTATOC rather than 177Lu-DOTATATE, cannot be completely ruled out. No statistically significant differences were observed regarding the OS. The median OS in the elderly and younger group was respectively 3.4 years and 6.0 years ( p = 0.094) . 
These results suggest that RLT may be a valid and relatively safe therapeutic option in a carefully selected cohort of frail patients. However, more robust and large-cohort studies are warranted to explore the risk/benefit ratio, also in the long-term, of RLT in this subgroup of patients. Such initiatives would be of remarkable impact, considering that alternative medical options such as targeted drugs (everolimus or sunitinib) or systemic chemotherapy are generally associated with higher toxicity and deterioration of QoL. An interdisciplinary and multidimensional approach is fundamental to guide therapeutic decisions in such a vulnerable population, especially when standardized guidelines are lacking. To provide the best care for frail individuals, it is necessary to scrupulously identify adequately eligible patients. Therefore, in a multidisciplinary context, validated assessment tools should be implemented to prudently evaluate important domains such as functional, cognitive, and nutritional status, potential limitations in activities of daily living, social settings, and comorbidities. Recommendation RLT should also be considered in frail patients as a valid therapeutic option despite the lack of specific supporting data. It is reasonable, especially in the elderly population with comorbidities, to pay greater attention to renal function and potential marrow toxicity before initiating therapy (5 - B). Q9. Is there a room for RLT in G3 GEP-NETs? Retrospective evidence suggested that RLT can be a relevant therapeutic option in patients with SSTR-positive G3 GEP-NETs, leading to disease control rates ranging between 30% and 80% and median PFS between 9 and 23 months . In the recent NETTER-2 trial, which evaluated 226 enrolled patients, 35% had G3 tumors. 
Overall, treatment with RLT was associated with a significant improvement in PFS (median PFS: 8.5 months in the control arm versus 22.8 months in the investigational arm; stratified HR: 0.28, p < 0.0001) and ORR (9.3% in the control arm versus 43% in the investigational arm; stratified OR: 7.81, p < 0.0001). Notably, PFS and ORR improvements were consistent across all pre-specified subgroups, including the G3 subgroup. Based on these results, first-line treatment with RLT is likely to be approved soon by regulatory authorities, becoming the first standard treatment option supported by high-level evidence for patients with advanced, G2-G3, SSTR-positive GEP-NETs. Another prospective phase III trial, the COMPOSE trial, is currently underway to compare first- or second-line RLT versus the best standard of care (chemotherapy or everolimus according to the investigator's choice) in patients with either G2 or G3 unresectable SSTR-positive GEP-NETs. The trial results are eagerly awaited, as they will provide much-needed information on treatment sequencing in patients with G3 GEP-NETs as well. No high-level evidence of antitumor activity currently exists for treatment modalities alternative to RLT in patients with metastatic G3 GEP-NETs. According to retrospective data and in light of the recent results of the NETTER-2 trial, SSA may exert some antiproliferative activity in patients with G3 GEP-NETs, although with significantly inferior outcomes compared to RLT. On the other hand, small series have documented the activity of either sunitinib or everolimus (alone or in combination with temozolomide) in G3 GEP-NETs. Alkylating-based (e.g., CAPTEM or STZ/5-FU) and fluoropyrimidine-based (e.g., FOLFOX) chemotherapy protocols appear effective in patients with G3 GEP-NETs. According to retrospective evidence, the CAPTEM regimen is associated with a median PFS ranging between 9 and 15 months in patients with advanced G3 tumors of the digestive tract.
Responses to temozolomide-based regimens appear more frequent in the first-line setting and in pancreatic primaries. The efficacy of etoposide-platinum chemotherapy appears limited in advanced G3 NETs, with the response rate in this population inferior to that observed in patients with poorly differentiated NECs. Overall, RLT might currently be considered a preferred option in the first-line treatment of patients with advanced SSTR-positive G3 GEP-NETs. Chemotherapy, particularly alkylating-based regimens, might be reserved for SSTR-negative G3 NETs or for patients progressing on RLT.

Recommendation

As soon as RLT is approved by regulatory authorities, it should be considered a valid option for patients with G2-G3 GEP-NETs expressing SSTR (1b - A).

Q10. Is there a rationale for repeating RLT treatment?

The rationale for repeating RLT in patients with GEP-NETs involves several factors. The decision is typically individualized, based on a combination of clinical assessments, imaging, and biochemical evaluations. If there is evidence of disease progression or recurrence following the initial course of RLT, a repeat treatment may be considered to target new or recurrent lesions. Initially, an SSTR-PET evaluation should be conducted to confirm the presence of somatostatin receptors on the NET lesions. According to the Delphi consensus, a partial response or stable disease must have been achieved for at least one year after the first RLT treatment. To accurately determine which patients could benefit from retreatment, implementing dosimetry in clinical practice is crucial. Dosimetry correlates tumor-absorbed doses with treatment effectiveness, especially in larger tumors. Recent studies have demonstrated the safety and efficacy of an RLT rechallenge with dosimetry calculations based on healthy organs such as the kidneys and bone marrow.
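The organ-dose bookkeeping behind such dosimetry-guided retreatment can be sketched in a few lines. This is purely illustrative, not the method of the cited studies: the function names and the 0.5 Gy/GBq kidney dose coefficient are hypothetical placeholders, and only the 23 Gy kidney tolerance threshold is a commonly cited reference value.

```python
# Minimal sketch of cumulative kidney-dose bookkeeping for RLT retreatment.
# All numbers are hypothetical; 23 Gy is the commonly cited kidney tolerance
# threshold, not a universal constant, and per-cycle dose coefficients would
# come from patient-specific post-therapy imaging in practice.

KIDNEY_LIMIT_GY = 23.0

def cumulative_kidney_dose(cycles):
    """Sum absorbed kidney dose over cycles given (activity_gbq, gy_per_gbq) pairs."""
    return sum(activity * coeff for activity, coeff in cycles)

def max_additional_activity(cycles, gy_per_gbq, limit=KIDNEY_LIMIT_GY):
    """Activity (GBq) still administrable before the kidney limit is reached,
    assuming a fixed dose coefficient for future cycles."""
    headroom = limit - cumulative_kidney_dose(cycles)
    return max(headroom, 0.0) / gy_per_gbq

# Example: four standard 7.4 GBq cycles with a measured 0.5 Gy/GBq coefficient.
cycles = [(7.4, 0.5)] * 4
print(cumulative_kidney_dose(cycles))        # 14.8 Gy absorbed so far
print(max_additional_activity(cycles, 0.5))  # 16.4 GBq of remaining headroom
```

The point of the sketch is the design choice it mirrors: retreatment eligibility is framed as remaining headroom under an organ-at-risk limit rather than as a fixed number of cycles.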
These findings suggest that incorporating personalized dosimetry, aimed at identifying organs with dose limits and determining the maximum tolerated accumulated activity, can enhance standard clinical practice by ensuring that therapeutic doses stay within safe limits for healthy organs. Notably, patients who reached the maximum tolerable absorbed dose of 23 Gy in their kidneys experienced nearly double the median PFS and OS. This highlights the significant potential benefits of adopting a personalized approach over fixed dosing in terms of oncological outcomes. The decision to repeat RLT is complex and requires careful consideration of various factors. Regular follow-up assessments, imaging studies, and ongoing communication between the patient and the dedicated tumor board are crucial for determining the most appropriate course of action in managing NETs.

Recommendation

Although not yet approved by regulatory authorities, retreatment with RLT should be considered a valid therapeutic option for those patients who had a favorable response to initial RLT, at the time of disease progression. Dosimetry data, including those from the initial RLT course, should be used to tailor the personalized dose for the retreatment approach (3b - B).

RLT with 177Lu-DOTATATE is currently approved by both the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of unresectable or metastatic, progressive, well-differentiated, G1/G2, SSTR-positive GEP-NETs. This indication is based on the multicenter, phase III, randomized, open-label NETTER-1 trial and large retrospective cohort studies. The NETTER-1 trial randomized 229 patients with well-differentiated, metastatic midgut NETs who progressed on standard-dose octreotide LAR to receive either 177Lu-DOTATATE at 7.4 GBq every 8 weeks or octreotide i.m. at 60 mg every 4 weeks.
The estimated rate of PFS at month 20 was 65% in the 177Lu-DOTATATE arm and 11% in the control arm (HR: 0.21, p < 0.0001), with consistent benefits across major prespecified subgroups. Moreover, RLT with 177Lu-DOTATATE significantly improved many QoL domains compared with high-dose octreotide. While the NETTER-1 trial enrolled only patients with midgut NETs, a large body of evidence suggests that RLT with 177Lu-DOTATATE is also safe and effective in SSTR-positive pancreatic and hindgut primaries. More recently, the multicenter, phase III, randomized, open-label NETTER-2 trial has investigated 177Lu-DOTATATE plus octreotide versus high-dose octreotide in patients with newly diagnosed, advanced, SSTR-positive G2/G3 GEP-NETs with Ki-67 ranging between 10% and 55%. The median PFS was significantly prolonged in the investigational arm (22.8 months) compared to the control arm (8.5 months; stratified HR: 0.28, p < 0.0001), with a significantly higher overall response rate (ORR) in the 177Lu-DOTATATE arm (43%) versus the high-dose octreotide arm (9.3%; OR: 7.81, p < 0.0001). On this basis, regulatory authorities will likely formally expand the indications for RLT to include frontline treatment of patients with GEP-NETs harboring a Ki-67 between 10% and 55%. At present, potential candidates for RLT with 177Lu-DOTATATE include patients with advanced SSTR-positive GEP-NETs who have progressed on prior SSA therapy. Since high tumor burden negatively impacts the efficacy of RLT, early placement of RLT in the therapeutic algorithm is advocated. Therefore, all patients with SSTR-positive advanced GEP-NETs progressive on first-line treatment should be considered for RLT. In patients with bulky, symptomatic disease (particularly in the case of pancreatic primaries) who need rapid tumor shrinkage, chemotherapy might be preferred over RLT.
In the future, potential candidates for RLT will also include patients with newly diagnosed G2/G3 GEP-NETs and Ki-67 ranging between 10% and 55%. The progressive expansion of the patient population potentially amenable to treatment with 177Lu-DOTATATE, in line with the advent of 177Lu-PSMA-617 for the treatment of prostate cancer, might pose several challenges from a production and drug administration standpoint. Timely preparation is needed to avoid bottlenecks and allow the administration of RLT to all potential candidates without delays.

Recommendation

The candidate for RLT is a patient with advanced (unresectable or metastatic) SSTR-positive GEP-NET who has progressed on prior therapy with SSA. For these patients, early incorporation of 177Lu-DOTATATE RLT into the treatment algorithm is recommended (1b - A).

Assessing disease progression in GEP-NETs before planning RLT involves a thorough evaluation using various clinical, imaging, and laboratory methods. Here are the key steps and considerations in assessing disease progression.

Imaging Studies: Utilize radiological imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) scans to assess evidence of primary tumors and metastasis and estimate tumor burden. These investigations help quantify neoplastic infiltration, pleural or ascitic fluid volume, and the presence of carcinoid heart disease (evaluated by echocardiography). CT and MRI also identify previously unrecognized lesions or conditions needing urgent treatment, such as pathological spinal fractures, and are essential for ruling out indications for locoregional therapies like embolization or chemoembolization in patients with liver-only disease.

Functional Imaging: Functional imaging, particularly 68-Gallium-SSTR PET scans (SSTR-PET), is specific for NETs. This imaging modality helps identify the presence of SSTRs on tumor cells, guiding the selection of patients suitable for RLT.
For lesions with high proliferative indexes, [18F]FDG PET/CT may complement the assessment by visualizing heightened metabolic activity, thus refining the evaluation of lesions targeted with alternative therapies. Recent advancements include the introduction of volumetric parameters like SSR-derived tumor volume and total lesion SSR as tools to aid in predicting PFS before RLT.

Biomarkers: While specific tumor markers are assessed in functioning tumors associated with clinical syndromes, the use of biochemical markers like chromogranin A, alkaline phosphatase, or alterations in transaminase ratios has been proposed to predict therapy effectiveness, although without definitive evidence of their predictive significance. Elevated chromogranin A levels alone should not be considered definitive evidence of disease progression due to the marker's low specificity.

Histological Evaluation: For long-term survivors with multiple secondary disease localizations and historical biopsies, it is crucial to consider a further histological evaluation before planning RLT due to the potential change in tumor grade over time. This is especially pertinent if the historical biopsy was from the primary tumor and there has been a significant increase in metastatic lesion number and sites. Performing an [18F]FDG PET/CT scan may help guide the selection of the most aggressive metastasis for biopsy.

Clinical Symptoms: Assess the patient's symptoms, including changes in flushing, diarrhea, abdominal pain, or other related symptoms. Worsening or new symptoms may indicate disease progression, necessitating a CT, MRI, or PET scan to provide a comprehensive overview of the patient's clinical condition.

Multidisciplinary Team Consultation: Engage a multidisciplinary team experienced in managing GEP-NETs, including oncologists, endocrinologists, gastroenterologists, radiologists, nuclear medicine specialists, pathologists, and surgeons, in the assessment process.
Discuss the patient's case to ensure a comprehensive understanding of the disease status and align with the patient's will and expectations. Multidisciplinary management significantly enhances care levels in patients with GEP-NETs. It is essential to approach disease progression assessment in GEP-NETs using these methods. Treatment decisions are often based on a comprehensive evaluation of all available information, with plans typically personalized to each patient's specific situation, considering factors like tumor grade, location, and overall health status.

Recommendation

An accurate multidisciplinary assessment of patients who are candidates for RLT is mandatory before initiating treatment. This assessment should include a complete radiological evaluation using CT and/or MRI, as well as SSTR-PET. In selected patients with a significant change in disease behavior (such as a noticeable increase in the number of tumor lesions or an evident increase in tumor burden), performing [18F]FDG PET/CT and/or repeating the histological evaluation may be proposed (3a - A).

While [18F]FDG PET/CT is not typically the primary imaging modality for GEP-NETs, it can be informative in certain cases and may influence decisions regarding RLT administration. EANM and ENETS guidelines recommend including [18F]FDG PET/CT in the diagnostic pathway for higher G2 (Ki-67: 10–20%), G3 NET, and NEC. The 2020 ESMO guidelines offer broader recommendations, suggesting the evaluation of both [18F]FDG PET/CT and SSTR-PET for all G2-G3 NETs. However, [18F]FDG PET/CT can also be positive in low-grade NETs of the G1 type, maintaining an unfavorable prognostic significance even in these tumors, confirming that the role of this technique in low-proliferation forms still needs full clarification. Some previous studies have investigated the use of both tracers, but they rely on retrospective data from populations that are not homogeneous regarding the primary lesion.
SSTR-PET and [18F]FDG PET/CT together may be indicated in certain cases, including at initial diagnosis for tumors with intermediate proliferative activity and during follow-up when assessing treatment changes or discrepancies between radiological and clinical evaluations. Here is how [18F]FDG PET/CT might influence the decision to perform RLT.

Tumor Metabolic Activity: [18F]FDG PET/CT provides information about the metabolic activity of tumors. NETs are generally slow-growing and may not exhibit high glucose metabolism, making [18F]FDG PET/CT less sensitive for these tumors. However, in poorly differentiated or more aggressive lesions with higher metabolic activity, [18F]FDG PET/CT may be used to assess the presence, number, and location of aggressive lesions, guiding treatment decisions towards alternatives to RLT, such as chemotherapy.

Tumor Intra- and Inter-lesion Heterogeneity: GEP-NETs may exhibit heterogeneity in receptor expression and metabolic activity. Combining information from both radiotracers provides a more comprehensive view of tumor characteristics. For instance, elevated [18F]FDG PET/CT activity might indicate swift progression in pancreatic NETs, even when diagnosed early or confirmed as well-differentiated. The presence of [18F]FDG PET/CT uptake could indicate undifferentiated disease foci, significantly impacting therapy response and prognosis. Lesions showing concordant SSTR-PET and [18F]FDG PET/CT uptake may still suggest a good probability of response to RLT, even in combination with chemotherapy.

Disease Staging, Monitoring, and Therapeutic Decision-Making: The decision to perform RLT is based on the presence of SSTRs on tumor cells. If GEP-NETs show SSTR expression, RLT may be considered.
However, in cases of uncertain diagnostic presentations (such as non-conclusive findings on CT, MRI, or SSTR-PET) or rapid clinical progression, it is advisable to also perform [18F]FDG PET/CT for a comprehensive overview of the multi-metastatic disease. Ultimately, the decision to perform RLT is multifaceted and should be made in consultation with a multidisciplinary team of specialists, considering the specific characteristics of the patient's tumors and their responses to various imaging modalities and previous therapies. The goal is to tailor the treatment plan to the individual patient's needs and the characteristics of their neuroendocrine lesions.

Recommendation

[18F]FDG PET/CT is recommended before RLT in cases with heterogeneous uptake at SSTR-PET, and in patients with suspicion of rapidly progressive disease (3b - A).

The phase III trials conducted in patients with intestinal NETs reported that median PFS was not reached for RLT with 177Lu-DOTATATE, while it was 11 months and 16.4 months for everolimus in non-functioning and functioning tumors, respectively. Although these studies were designed on populations that are not directly comparable, the higher anti-proliferative efficacy of RLT compared with everolimus is now well established. This constitutes the first and most significant evidence in favor of choosing RLT after the failure of SSA treatment. The ORR was significantly higher with RLT than with everolimus. In patients with advanced panNET initially considered unresectable or borderline resectable, neoadjuvant treatment with 177Lu-DOTATATE enabled successful surgery in 31% of cases. Therefore, early use of RLT can alter the natural history of these tumors. Patients with GEP-NET who are candidates to receive SSA as first-line therapy typically present with low-proliferating tumors and a long life expectancy.
In this setting, the second-line therapy needs to be effective, but safety is of primary importance to avoid serious adverse events and related treatment interruptions or withdrawals. The ultimate goal is to achieve long-term tumor stabilization and a good QoL. For this purpose, RLT offers a better risk/benefit ratio than targeted therapies. Comparing different therapeutic sequences, RLT was found to be safer than either everolimus or chemotherapy as a second-line therapy. From the patient's perspective, a French national survey indicated that RLT had the best median perceived tolerance compared to all other treatments, including everolimus, sunitinib, and chemotherapy. On the other hand, toxicity, rather than tumor progression, was the most frequent reason for discontinuation of everolimus and sunitinib. The long-term safety results of the NETTER-1 trial confirmed that 177Lu-DOTATATE is safe, and no new serious adverse events were reported during the long-term follow-up. Beyond the low toxicity rate, RLT has been reported to have a significant positive impact on health-related quality of life in large randomized trials performed in gastroenteropancreatic NETs, improving both global health status and specific symptoms. The phase II non-comparative OCLURANDOM study recently randomized patients with advanced, progressive, SSTR-positive panNET to receive either 177Lu-DOTATATE or sunitinib. The 12-month PFS rate was 80.5% in the RLT arm versus 42% in the sunitinib arm, thus confirming that RLT outperforms targeted agents in patients progressive on first-line therapy with SSA. Two prospective, randomized phase III trials (COMPETE and COMPOSE) are currently underway to compare the efficacy of RLT versus everolimus or versus the best standard of care (chemotherapy or everolimus, according to the investigator's choice) in patients with unresectable progressive GEP-NETs (ClinicalTrials.gov NCT03049189 and NCT04919226).
Recommendation

In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over targeted agents (everolimus or sunitinib) after the failure of SSA due to its better expected efficacy and safety profile (2b - B).

Both retrospective and prospective evidence indicates that chemotherapy is effective in treating GEP-NETs. Specifically, alkylating agents such as streptozocin, dacarbazine, and temozolomide (alone or in combination with capecitabine) have demonstrated antitumor activity in panNETs. The prospective ECOG-ACRIN E2211 phase II trial recently compared temozolomide alone to temozolomide plus capecitabine in 144 patients with advanced progressive G1-G2 panNETs. The study showed a significant improvement in PFS in the combination arm (median PFS 22.7 vs. 14.4 months, respectively) and a trend towards improved ORR (40% vs. 34%) and median OS (58.7 vs. 53.8 months, respectively), although 45% of patients experienced G3/G4 toxicity. While most well-differentiated gastrointestinal NETs tend to be resistant to alkylating agents, fluoropyrimidine-based combinations (e.g., FOLFOX) show antitumor activity in this patient population, potentially causing rapid tumor shrinkage. A large, multicenter, retrospective study of 508 patients with advanced GEP-NETs recently showed that second-line therapy with RLT was associated with improved PFS compared to targeted therapies or chemotherapy (median 2.2 years [95% CI, 1.8–2.8 years] vs. 0.6 years [95% CI, 0.4–1.0 years], respectively, in the matched population; p < 0.001). This effect was consistent across different primary sites and hormonal statuses, though the advantage in PFS was not observed in tumors with a Ki-67 greater than 10%. According to retrospective evidence, RLT is associated with improved survival outcomes in patients who did not receive chemotherapy before RLT initiation.
Several clinical trials are currently comparing RLT with chemotherapy in patients with progressive disease (NCT05247905, NCT04919226), and results are eagerly awaited. Overall, many factors should be considered when choosing between RLT and chemotherapy in patients who are progressive on first-line SSA therapy. These include the pace of tumor growth and the need for rapid tumor shrinkage. While the density of SSTR expression on SSTR-PET scan can accurately preselect the patients most likely to respond to RLT, O6-methylguanine-DNA methyltransferase (MGMT) testing might be helpful in predicting response to temozolomide-based regimens.

Recommendation

In patients with progressive G1-G2 GEP-NETs, RLT should be preferred as a second-line treatment over chemotherapy after the failure of SSA. However, chemotherapy remains an option to consider in the treatment of panNET patients who have a high tumor burden and/or tumor-related symptoms, or in cases of rapid progression, regardless of the primary tumor site (3b - A).

While it is well established that escalating the dose of SSA can enhance symptom control in functioning tumors when the standard SSA dosage proves ineffective, the actual impact of increased SSA doses on tumor growth, particularly in the clinical context of non-functioning tumors, remains ambiguous. Until recently, selecting a second-line therapy after the failure of standard-dose SSA in well-differentiated G1-G2 GEP-NETs was notably challenging. Earlier retrospective studies suggested a potential improvement in PFS with increased SSA doses. However, this observation was not corroborated in prospective studies involving patients with radiologically confirmed progressive disease under standard SSA doses. In such clinical scenarios, the reported median PFS values, as indicated by the CLARINET FORTE study and the control arm of the NETTER-1 trial, ranged between 5 and 8 months.
A recent meta-analysis of 783 patients across 11 studies found that the rate of disease progression under high-dose SSA was 62 (95% CI: 53–70) per 100 subjects treated per year. Conversely, in the same clinical scenario of progressive well-differentiated GEP-NETs, RLT demonstrated a significantly higher PFS rate, as observed in both randomized controlled trials and real-world study settings. Data from the phase III NETTER-1 trial, where the median PFS was not reached in the initial analysis and was estimated at 25 months in the final analysis, align with findings from retrospective multicenter studies. These studies reported a median PFS of approximately 2.5 years. A similar trend was observed when considering the ORR as an endpoint. In the context of high-dose SSA, although earlier retrospective small-scale studies reported promising objective response rates of up to 31%, prospective trials indicated a significantly lower likelihood of achieving an objective tumor response, with rates ranging between 3% and 4%. On the other hand, when analyzing the ORR for RLT, the values vary significantly: the NETTER-1 study reported a rate of 18%, while the larger retrospective study by Brabander et al. indicated a range between 31% and 58%. Based on these considerations, RLT has demonstrated greater efficacy compared to high-dose SSA in the various clinical settings evaluated, including both RCTs and retrospective real-world studies. This superiority is evident in terms of both PFS and ORR.

Recommendation

In patients with progressive G1-G2 GEP-NETs, RLT is recommended as a second-line treatment over high-dose SSA after the failure of standard-dose SSA due to its better expected efficacy. High-dose SSA remains an option as a temporary bridge until RLT initiation or in patients unfit for other antitumor treatments due to comorbidities (1b - A).
3D imaging, particularly through contrast-enhanced CT or MRI, is the main method for evaluating treatment response by observing changes in lesion dimensions over time. Tumor size measurements are primarily conducted according to the Response Evaluation Criteria in Solid Tumours version 1.1 (RECIST 1.1). However, assessing treatment response based solely on changes in tumor size presents several challenges, especially with GEP-NETs. These tumors may stabilize or even initially increase in size while responding to treatment. Additionally, the central tumor necrosis frequently reported during RLT complicates assessment with radiological criteria due to 'false-positive' size increases. Furthermore, shrinkage following RLT can be a delayed occurrence. These factors underscore the limitations associated with the RECIST 1.1 criteria, suggesting that their use in evaluating slow-growing neoplasms such as GEP-NETs should be approached cautiously. To address these limitations, the Choi criteria have been introduced, assessing both the dimensional changes and the density variation of lesions on contrast-enhanced CT images. Numerous studies comparing the two criteria for NET evaluation consistently show equal or markedly superior results for Choi versus RECIST. However, it is important to note that while the arterial phase of CT is most commonly used in assessing GEP-NETs, considering their vascularity, the Choi criteria rely on images obtained during the portal venous phase. This discrepancy represents a major limitation in applying the Choi criteria in the neuroendocrine context. In light of these challenges, new methods have been proposed to assess therapy response, including the application of long-established tools used for evaluating growth rates in other neoplastic pathologies.
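One common way such growth-rate tools are formalized in the oncology literature (an assumption here, as the source does not spell out the formula) is the following:

```latex
% Common formalization of tumor growth rate (TGR); not given explicitly in the
% source, this follows the definition widely used in the oncology literature.
\[
  \mathrm{TG} \;=\; \frac{3\,\ln\!\left(D_{2}/D_{1}\right)}{t},
  \qquad
  \mathrm{TGR} \;=\; 100\,\bigl[\exp(\mathrm{TG}) - 1\bigr]\ \%/\text{month},
\]
% where $D_{1}$ and $D_{2}$ are the sums of the longest diameters of the target
% lesions at two imaging time points separated by $t$ months, and tumor volume
% is assumed proportional to $D^{3}$.
```

Under this convention, TGR expresses the percentage change in tumor volume per month, which allows growth kinetics to be compared across scans acquired at irregular intervals.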
The tumor growth rate (TGR) is one emerging tool, based on the variation in the volume of target lesions normalized for the time between two radiological assessments (CT or MRI). Recent studies have also highlighted its application in the neuroendocrine field, showing that baseline TGR highlights the heterogeneity of well-differentiated GEP-NETs and predicts increases in the Ki-67 index over time. Additionally, Weber et al. evaluated the utility of hybrid techniques such as SSTR-PET/MRI in a small sample study. The results suggest that pre-therapeutic SSTR-PET/MRI may not be a reliable predictor of treatment response to RLT in NET patients. Conversely, patients treated with SSA exhibit variations in the apparent diffusion coefficient map on MRI imaging compared to those treated with RLT. Finally, features extracted from SSTR-PET/MRI performed before RLT were not good predictors of treatment response.

Recommendation

RECIST 1.1 criteria, evaluated by contrast-enhanced CT or MRI, should be used to monitor the efficacy of RLT during follow-up. Attention should also be paid to changes in tumor lesion morphology beyond modifications in their size (3b - A).

Frailty is a syndrome with complex multifactorial physiopathology affecting up to 17% of the geriatric population. This clinical status implies major vulnerability across multiple health domains, including weakness, decreased functional performance, unintentional weight loss, cognitive impairment, increased risk of comorbidities, and organ dysfunction, leading to adverse health outcomes. As the prevalence of GEP-NETs and the proportion of elderly people both increase globally, it is reasonable to hypothesize that a progressively higher proportion of patients with GEP-NETs will be frail.
Additionally, another database analysis of 22,744 cases revealed the highest incidence rate of GEP-NETs in patients over 70 years old, with 16–17 cases per 100,000 . The frail oncological population tends to receive delayed or incomplete diagnostic evaluations and often suboptimal therapy, considering the patient’s comorbidities and major risk of toxicity or complications, leading to an unfavorable therapeutic risk/benefit ratio . Regarding RLT, frail patients more commonly present with altered renal function or hematological disorders, thus tending to be less frequently eligible for RLT. Currently, there are no standardized recommendations in the literature regarding using RLT in frail patients. Theiler et al. conducted a retrospective matched cohort study to assess the efficacy and safety of RLT with 90Y-DOTATOC or 177Lu-DOTATATE in elderly patients over 79 years old affected by well-differentiated G1 or G2, SSTR-positive NETs compared to their younger counterparts. The exclusion criteria included ECOG performance status ≥ 3, hematological impairment (hemoglobin < 80 g/L, platelet count < 75 × 10 9 /L), reduced eGFR (< 45 mL/min), or increased levels of AST/ALT (> 3 times upper range of normal). Overall, despite a higher baseline rate of comorbidities, renal and hematological impairment, and a lower ECOG performance status in the elderly cohort, RLT was found to be an effective strategy with a similar toxicity profile in both groups. Nevertheless, long-term adverse events, particularly renal dysfunction when administered 90Y-DOTATOC rather than 177Lu-DOTATATE, cannot be completely ruled out. No statistically significant differences were observed regarding the OS. The median OS in the elderly and younger group was respectively 3.4 years and 6.0 years ( p = 0.094) . These results suggest that RLT may be a valid and relatively safe therapeutic option in a carefully selected cohort of frail patients. 
However, more robust and large-cohort studies are warranted to explore the risk/benefit ratio, also in the long-term, of RLT in this subgroup of patients. Such initiatives would be of remarkable impact, considering that alternative medical options such as targeted drugs (everolimus or sunitinib) or systemic chemotherapy are generally associated with higher toxicity and deterioration of QoL. An interdisciplinary and multidimensional approach is fundamental to guide therapeutic decisions in such a vulnerable population, especially when standardized guidelines are lacking. To provide the best care for frail individuals, it is necessary to scrupulously identify adequately eligible patients. Therefore, in a multidisciplinary context, validated assessment tools should be implemented to prudently evaluate important domains such as functional, cognitive, and nutritional status, potential limitations in activities of daily living, social settings, and comorbidities. RLT should also be considered in frail patients as a valid therapeutic option despite the lack of specific supporting data. It is reasonable, especially in the elderly population with comorbidities, to pay greater attention to renal function and potential marrow toxicity before initiating therapy (5 - B). Retrospective evidence suggested that RLT can be a relevant therapeutic option in patients with SSTR-positive G3 GEP-NETs, leading to disease control rates ranging between 30% and 80% and median PFS between 9 and 23 months . In the recent NETTER-2 trial, which evaluated 226 enrolled patients, 35% had G3 tumors. Overall, treatment with RLT was associated with a significant improvement in PFS (median PFS: 8.5 months in the control arm versus 22.8 months in the investigational arm; stratified HR: 0.28, p < 0.0001) and ORR (9.3% in the control arm versus 43% in the investigational arm; stratified OR: 7.81, p < 0.0001) . 
Notably, PFS and ORR improvements were consistent across all pre-specified subgroups, including the G3 subgroup. Based on these results, it is likely that first-line treatment with RLT will be approved soon by regulatory authorities, becoming the first standard treatment option supported by high-level evidence for patients with advanced, G2-G3, SSTR-positive GEP-NETs. Another prospective phase III trial, the COMPOSE trial, is currently underway to compare first or second-line RLT versus the best standard of care (chemotherapy or everolimus according to investigator’s choice) in patients with either G2 or G3 unresectable SSTR-positive GEP-NETs. The trial results are eagerly awaited, as they will provide much-needed information on treatment sequencing also in patients with G3 GEP-NETs. No high-level evidence of antitumor activity currently exists for treatment modalities alternative to RLT in patients with metastatic G3 GEP-NETs. According to retrospective data and in light of the recent results of the NETTER-2 trial, SSA may exert some antiproliferative activity in patients with G3 GEP-NETs, although with significantly inferior outcomes compared to RLT. On the other hand, small series have documented the activity of either sunitinib or everolimus (alone or in combination with temozolomide) in G3 GEP-NETs. Alkylating-based (i.e., CAPTEM or STZ/5-FU) and fluoropyrimidine-based (i.e., FOLFOX) chemotherapy protocols appear effective in patients with G3 GEP-NETs. According to retrospective evidence, the CAPTEM regimen is associated with a median PFS ranging between 9 and 15 months in patients with advanced G3 tumors of the digestive tract. Responses to temozolomide-based regimens appear more frequent in the first-line setting and in pancreatic primaries. The efficacy of etoposide-platinum chemotherapy appears limited in advanced G3 NETs, with the response rate in this population inferior to that observed in patients with poorly differentiated NECs.
Overall, RLT might currently be considered a preferred option in the first-line treatment of patients with advanced SSTR-positive G3 GEP-NETs. Chemotherapy, particularly alkylating-based regimens, might be reserved for SSTR-negative G3 NETs or for patients progressing on RLT. As soon as RLT is approved by regulatory authorities, it should be considered a valid option for patients with G2-G3 GEP-NETs expressing SSTR (1b - A). The rationale for repeating RLT in patients with GEP-NETs involves several factors. The decision is typically individualized, based on a combination of clinical assessments, imaging, and biochemical evaluations. If there is evidence of disease progression or recurrence following the initial course of RLT, a repeat treatment may be considered to target new or recurrent lesions. Initially, an SSTR-PET evaluation should be conducted to confirm the presence of somatostatin receptors on the NET lesions. According to the Delphi consensus, a partial response or stable disease must have been achieved for at least one year after the first RLT treatment. To accurately determine which patients could benefit from retreatment, implementing dosimetry in clinical practice is crucial. Dosimetry correlates tumor-absorbed doses and treatment effectiveness, especially in larger tumors. Recent studies have demonstrated the safety and efficacy of an RLT rechallenge with dosimetry calculations based on healthy organs such as the kidneys and bone marrow. These findings suggest that incorporating personalized dosimetry, aimed at identifying organs with dose limits and determining the maximum tolerated accumulated activity, can enhance standard clinical practices by ensuring that therapeutic doses stay within safe limits for healthy organs. Notably, patients who reached the maximum tolerable absorbed dose of 23 Gy in their kidneys experienced nearly double the median PFS and OS.
This highlights the significant potential benefits of adopting a personalized approach over fixed dosing in terms of oncological outcomes. The decision to repeat RLT is complex and requires careful consideration of various factors. Regular follow-up assessments, imaging studies, and ongoing communication between the patient and the dedicated tumor board are crucial for determining the most appropriate course of action in managing NETs. Although not yet approved by regulatory authorities, retreatment with RLT should be considered a valid therapeutic option for those patients who had a favorable response to initial RLT at the time of disease progression. Dosimetry data, including initial RLT, should be used to tailor the personalized dose for the retreatment approach (3b - B). This position paper strongly advocates for the early integration of RLT into the treatment regimen for advanced SSTR-positive GEP-NETs following the failure of SSA. Before initiating RLT, [18 F]FDG PET/CT is recommended for patients with heterogeneous uptake on SSTR-PET or those suspected of rapid tumor progression. RLT with 177Lu-DOTATATE stands out as the preferred second-line treatment over targeted therapies, chemotherapy, or high-dose SSA for progressive G1-G2 GEP-NETs thanks to its superior efficacy and safety profile. This recommendation applies provided that the disease homogeneously expresses SSTRs, is not rapidly progressing, or is not highly symptomatic. To assess the effectiveness of RLT, RECIST 1.1 criteria through contrast-enhanced CT or MRI are advised, emphasizing changes in tumor morphology. Looking forward, it is anticipated that upon regulatory approval, RLT will be considered a valid treatment option for patients with well-differentiated high-grade SSTR-positive GEP-NETs. Additionally, retreatment with RLT will be suggested for those who have shown a favorable response to the initial treatment upon disease progression, ideally using tailored dosimetry. 
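The per-organ dose-limit logic behind personalized dosimetry can be pictured as a simple cumulative calculation. The sketch below is purely illustrative, not clinical software: the 23 Gy kidney absorbed-dose limit comes from the text above, while the function name and all per-cycle dose values are hypothetical examples.

```python
# Illustrative sketch of cumulative-dose tracking for personalized RLT dosimetry.
# The 23 Gy kidney dose limit is taken from the text; all other numbers are
# hypothetical examples, not clinical values.

KIDNEY_DOSE_LIMIT_GY = 23.0

def remaining_cycles(per_cycle_kidney_dose_gy, absorbed_so_far_gy=0.0):
    """Return how many further treatment cycles fit under the kidney dose limit."""
    if per_cycle_kidney_dose_gy <= 0:
        raise ValueError("per-cycle dose must be positive")
    headroom = KIDNEY_DOSE_LIMIT_GY - absorbed_so_far_gy
    return max(0, int(headroom // per_cycle_kidney_dose_gy))

# Example: kidneys absorbing ~3.5 Gy per cycle, with 14 Gy already
# accumulated across prior cycles.
print(remaining_cycles(3.5, absorbed_so_far_gy=14.0))  # 2 more cycles fit under 23 Gy
```

This is the basic arithmetic that distinguishes a personalized approach from fixed dosing: the number of cycles is derived from each patient's measured organ-absorbed doses rather than applied uniformly.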
The key messages from this position paper are summarized in Table . Below is the link to the electronic supplementary material. Supplementary Material 1
Equitable Access to Genomic Molecular Testing for Australian Cancer Patients: Insights from the Victorian Precision Oncology Summit

Genomic molecular testing for cancer driver aberrations has transformed cancer care, including diagnosis and treatment planning, and ranges from single gene tests to comprehensive whole genome sequencing (WGS). And yet, there are significant barriers to more widespread uptake of molecular testing. In Australia, some single gene and small gene panels are reimbursed by the federal government via the Medicare Benefits Schedule (MBS). However, cancer care is increasingly moving towards the use of comprehensive genomic profiling (CGP) in the form of large gene panels (such as the Illumina TSO500™ or Foundation Medicine assays) and WGS. Such tests are increasingly being used by Australian oncologists and haematologists to help guide treatment decisions but are currently not reimbursed and are only offered through research programs or privately funded by individuals. Uncertainty around which tests are reimbursed, where the tests are available, or even which tests are appropriate for a given patient, is preventing widespread adoption of CGP. The large geographical expanse of Australia also represents an additional challenge to equitable access as CGP is mainly offered in metropolitan centres through research programs. Unless archival tissue is readily available, patients from rural and regional areas often must travel hundreds of kilometres to have the biopsies required for CGP. Patient awareness and understanding of genomic molecular testing is another hurdle to greater uptake. A recent review of 21 studies assessing patient experiences and expectations of tumour multigene next-generation sequencing found that most patients have a poor understanding of molecular testing, with many confusing germline and somatic testing.
Fear of genetic discrimination by insurance providers or the perceived consequences of inherited genetic mutations to their family members have been identified as barriers to uptake of testing. However, incidental germline mutations are rarely detected as part of CGP. In Australia, genetic tests have no impact on eligibility for health insurance, and since 2019 there has been a partial moratorium in place restricting genetic test results from affecting life insurance products (although this is set to expire in 2024). Importantly, there have been limited efforts to address cultural or linguistic barriers for Aboriginal and Torres Strait Islander and Culturally and Linguistically Diverse (CALD) communities accessing CGP. The Victorian Precision Oncology Summit, convened in April 2023, was a joint initiative between the Victorian Comprehensive Cancer Centre Alliance (VCCC Alliance) and Monash Partners Comprehensive Cancer Consortium (MPCCC) and was proposed to guide a coordinated state-wide conversation about how the sector can overcome some of the current obstacles in achieving equity of access to clinical cancer genomics for Victorians. For the Summit, it was decided to focus on those tests that are not currently reimbursed by the MBS, such as CGP. The event also served as a large consultation piece for the development of a roadmap to explore equitable access to molecular testing for Victorian patients, currently in development by the VCCC Alliance and MPCCC in collaboration with other key Victorian and national stakeholders. Summit Organisation and Participants The Precision Oncology Summit was co-convened by the VCCC Alliance and MPCCC, expanding the reach of the event across both networks. Collectively, these strategic alliances represent 20 research, academic and clinical institutions in Victoria with a common goal of improving outcomes for cancer patients.
Representatives from both entities came together to hear expert presentations and to workshop ways to work towards a more equitable framework of access to molecular testing, with a specific focus on non-reimbursed molecular testing, or CGP. A Summit Steering Group was established at the start of the project, comprising 18 experts in the field, including 2 consumers. The Summit was a hybrid event (online and in person) open to all and promoted across the VCCC Alliance and MPCCC networks. There were over 150 attendees (66 in person, 86 online) who were a mixture of medical oncologists, pathologists, nurses, researchers, molecular testing curators, industry, government representatives, education providers, consumers, and other health professionals. The geographical representation of attendees was 82% metropolitan, 6% regional, 11% interstate and 1% international.
After a series of keynote presentations from national and international speakers (including a consumer perspective), attendees of the summit separated into break-out groups to discuss six workshop topics (two in person groups and one online group per topic). Prior to the Summit, the Precision Oncology Summit Steering Group initiated a targeted scoping survey, disseminated via email to the Directors of Oncology and Haematology Clinical Services across Victoria, to ensure the program and discussion topics were needs-based. Respondents were specifically queried regarding their perspectives on enhancing accessibility to non-reimbursed molecular testing, thereby shaping the focal points of discussion at the Summit. The topics covered were: (1) access to non-reimbursed molecular testing, (2) molecular testing variability, (3) data collection, (4) molecular reporting and clinical utility, (5) clinician awareness and literacy, and (6) consumer awareness and literacy. Before the Summit, the attendees were provided with discussion guides summarising the topics and were asked for input on key questions that had been approved by the Summit Steering Committee. Outcomes of the breakout discussions were synthesised and delineated into overarching themes that arose across the discussion topics, namely, 1. workforce education, 2. patient education and awareness, 3. standardisation, 4. centralisation, 5. funding, and 6. data and sharing (detailed below, and summarised in ). While the insights garnered from the summit were extensive, it is acknowledged that there will be other key considerations (e.g., bioinformatics and the proper training of clinical bioinformaticians) that were not specifically discussed at the Summit but that would be important in the optimal implementation of CGP into complex health systems. 2.1.
Workforce Education Education of the cancer workforce is important for achieving equitable access to precision oncology. Providing clinicians with comprehensive education on molecular testing, including its applications, interpretation, effective patient communication, clinical decision making, and fundamental genetics and genomics, will enhance their confidence and competence in selecting appropriate tests. This education needs to cover the selection of molecular testing panels, understanding the diagnostic or therapeutic purpose of the test, and the limitations of different testing approaches. Workforce education is strained by the need to keep abreast of this rapidly evolving field, in which genomics is being integrated into routine care and decreasing costs are driving increasing demand. Challenges include a lack of a standardised curriculum, limited access to high quality resources and educators, limited interdisciplinary training opportunities, ethical and regulatory considerations, integrating complex content like data science, and maintaining sustainability. Discussion Outcomes The available resources for upskilling in precision oncology are characterised by a lack of formal guidelines, diverse providers with significant quality disparities, and a predominantly ad hoc and self-directed approach to learning. Upskilling in this area is often overwhelming for clinicians, and a systemic, more structured approach to guide health professionals on the best pathways for upskilling is needed.
Such centralised education pathways could potentially be implemented through organisations such as the National Health and Medical Research Council (NHMRC) or Royal Australian College of Physicians (RACP), or by incorporating precision oncology as a formal degree requirement; the Human Genetics Society of Australasia, for example, has put forward integrating genomics and molecular testing into the training curriculum for clinicians, including pathologists and oncologists. However, it was also highlighted that ongoing training through professional development would be required due to the highly dynamic nature of genomics. Ways to further increase engagement could involve utilising gamified online learning tools, applying genomic-specific requirements within continuing professional development medical, nursing, and allied health frameworks, or providing focused sessions at conferences that start with the basics of precision oncology. Collaboration among hospitals across all regions is vital to share learnings and avoid fragmented pathways. Standardised guidelines, policies and practices should be implemented and publicised through dedicated healthcare communication channels. The field of molecular testing is constantly evolving, and the discussion emphasised how challenging it is to determine what level of detail is relevant for various sub-specialities, along with the modality, format, and platforms on which it will be delivered. Who should author the educational materials was also discussed and it was noted that expertise and credibility should be prioritised, and the formation of a multidisciplinary subject matter team is crucial to provide authoritative guidance, content, and effective delivery methods.
It is unreasonable to expect that all healthcare professionals will have a detailed, up-to-date knowledge of molecular testing; therefore, upskilling the majority in general knowledge (for example, knowing that tests exist and where to seek assistance), whilst providing specialised knowledge and skills for a limited number of professionals who require it, may be more realistic and efficient. It is essential to carefully address sustainability measures to keep content current and readily accessible whilst avoiding creating isolated knowledge silos within the healthcare community. Participants also recommended addressing cancer workforce skills shortages, particularly in data curation. With more molecular testing there will be a greater need for data curators with specialised oncology training. A significant lack of molecular pathologists specialising in the solid tumour space was also highlighted. Molecular pathology requires specific expertise, and the scarcity of professionals in this field, at least currently in Australia, hinders the widespread implementation of formal education programs and comprehensive workforce development. 2.2. Patient Education and Awareness Consumer awareness of, and literacy in, molecular testing and its implications for their care is important for improving autonomy, advocacy, and patient outcomes. Improved understanding of which groups (e.g., different geographical groups, CALD groups, etc.) are not accessing precision oncology will be important so that communication can be tailored in an appropriate format to reach those most in need. Discussion Outcomes A strong link between consumer and oncologist was highlighted as being very important by discussion participants, a fact supported by the recent literature. However, support networks and other allied health professionals were also suggested to ensure consumers feel empowered to ask questions and make informed decisions about how molecular testing could benefit their treatment.
To achieve this, it was noted that clinicians (including primary care clinicians) and nurses need to better understand patient needs and have higher levels of awareness of the benefits of molecular testing and support to disseminate this information to patients. A recent review of patient experiences of molecular testing noted that patients prefer to discuss their molecular testing results with an oncologist or nurse, rather than receiving written, internet-based, or video-based resources to explain the findings. The role of nurses as educators and pathway navigators was noted as being highly valued by Summit participants and could be a potential avenue for increasing patient education. The role of the consumer as an advocate was also raised in the discussion groups. Consumer advocates provide an important avenue to disseminate information and inspire and educate others. Harnessing community connections could assist consumers in understanding the options available to them, along with providing support to access information from existing credible resources such as Cancer Australia and the Cancer Councils. Videos and blogs of consumer stories could be another way to help raise awareness. Providing consumers with a platform to advocate for issues important to them, and to help shape policy direction, is critical; however, consideration must be given to who should act in this role and how adequate representation across the spectrum of consumers could be achieved. Improved coordination of support groups, and the establishment of a consumer alliance, would be one way to facilitate this. Developing a clear, purposeful, and collaborative consumer advocacy strategy will be essential for maximising the impact of the consumer voice and for improving consumer awareness of the benefits of precision oncology.
Considerations should include consultation with diverse populations, development of training and education programs for consumer advocates, and centralised coordination of advocacy groups to lobby for positive change more powerfully. 2.3. Standardisation Developing standardised guidelines and frameworks is essential to provide clinicians with the confidence to order molecular testing for their patients. Consistency in terminology, reporting, and outcomes (as they relate to clinical significance and local clinical trial information) is crucial to mitigate against misinterpretation and improve the quality and reliability of the test results. Establishing quality benchmarking and optimal care pathways in precision oncology will be important for embedding genomics and precision oncology into routine clinical workflows and for improving equitable care for patients. Discussion Outcomes Variations in data collection practices were noted by Summit discussion participants as a major issue, as they give rise to disparate datasets that impede the effective exchange and analysis of information. This situation can be attributed primarily to the absence of standardised reporting mechanisms and a shared language specifically tailored for molecular testing. Participants agreed that the current National Association of Testing Authorities (NATA)-mandated minimum requirements are too broad, and inclusions such as mutations described using Human Genome Variation Society (HGVS) nomenclature, clinical significance tiered according to the Association for Molecular Pathology guidelines, and information about local clinical trials, were all recommended. Implementing standardised reporting and collection methods will be essential for ensuring consistency and interoperability across different molecular testing practices.
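One way to picture the standardised report fields discussed above is as a minimal structured template. This is a sketch only: HGVS nomenclature and Association for Molecular Pathology (AMP) significance tiers are the conventions named in the text, but the exact field names, the example variant, and the helper function here are hypothetical illustrations, not any endorsed standard.

```python
import json

# Hypothetical minimal template for a standardised somatic variant report.
# HGVS notation and AMP tiers (I-IV) are the conventions mentioned in the
# text; the field names below are illustrative only.
AMP_TIERS = {"I", "II", "III", "IV"}

def make_variant_entry(gene, hgvs_c, hgvs_p, amp_tier, local_trials=()):
    """Build one variant record, rejecting unknown significance tiers."""
    if amp_tier not in AMP_TIERS:
        raise ValueError(f"unknown AMP tier: {amp_tier}")
    return {
        "gene": gene,
        "hgvs_coding": hgvs_c,        # coding-level HGVS description
        "hgvs_protein": hgvs_p,       # protein-level HGVS description
        "amp_tier": amp_tier,         # clinical significance tier
        "local_trials": list(local_trials),  # recruiting local trials, if any
    }

report = {
    "specimen": "FFPE, primary tumour",
    "assay": "large-panel CGP (illustrative)",
    "variants": [
        make_variant_entry("BRAF", "NM_004333.6:c.1799T>A", "p.(Val600Glu)",
                           "I", local_trials=["(example trial ID)"]),
    ],
}
print(json.dumps(report, indent=2))
```

A shared template of this kind, however it is ultimately specified, is what would let disparate laboratories produce machine-comparable reports rather than free-text summaries.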
Developing templates that capture relevant clinical and patient information, tailored to specific cancer streams or panel sizes, will enable more accurate and uniform data collection. A recent survey of over 100 Australian oncologists recommended including evidence-based therapy recommendations and gene rankings in comprehensive genomic profiling reports as standard practice. Establishing a shared language for data reporting will facilitate meaningful data exchange and analysis. It was also suggested that promoting a closer network and collaboration among clinicians could enhance the understanding and utilisation of molecular testing. Establishing forums, conferences, and platforms for knowledge sharing and discussion would facilitate the exchange of best practices and improve the consistency and standardisation of testing approaches. Improving the awareness and accessibility of molecular tumour boards was also suggested as an enabler towards the greater equity of access to molecular tests. 2.4. Centralisation Centralisation was a common theme that emerged across several topics at the Summit, with participants calling for centralised platforms of information relating to the molecular tests themselves, as well as around clinical trials, data interpretation tools and education resources. Discussion Outcomes The rapid rate of change within precision oncology is a barrier to clinicians maintaining up-to-date knowledge on the appropriate test, interpretation of that test and access to potential clinical trials. Access to a centralised resource of experts in precision oncology that can guide decision making and advise on access to treatments and trials, as well as a monitored, broadly available troubleshooting platform and/or decision tree are required to better support clinicians to provide equitable care for all cancer patients.
The centralised approach to tissue typing and the National Coronavirus Helpline were identified as existing models that such a resource could be based on. This could also be achieved through a clinical fellow ‘on call’ through the VCCC Alliance and/or MPCCC that could provide personalised support, and regional “champions” skilled in precision oncology to support a given regional area. Molecular Tumour Boards were again identified as an important educational and peer-support tool. Currently, informal networks (such as instant messenger groups) are available to some clinicians, but it was noted that the ad hoc nature of this is perpetuating inequity. In terms of the tests themselves, creating a centralised platform entity that integrates pathology services and outlines guidelines for specific tumour types may address the issue of fragmented laboratories. This platform could serve as a repository of information, including recommended tests, tissue type requirements, and turnaround times. Encouraging collaboration and cooperation among pathology laboratories through the establishment of pathology networks could help coordinate testing efforts and prevent fragmented and duplicative approaches. This integrated approach would facilitate the development of common structures, guidelines, and quality assurance measures for molecular testing. Establishing a central registry through data linkage would enable more comprehensive data aggregation and analysis, support population-level research, and provide a broader perspective on molecular testing outcomes that could drive future innovation. A centralised registry would reduce the risk of health data loss when patient care transitions between different healthcare providers. A barrier to centralisation identified across different discussion groups was that databases are not well maintained and can be difficult to navigate. 
Development of a centralised website, other online resource or an application that brings together all local information about available genomic tests and clinical trials and mandates the inclusion of clinical trial information would likely be a strong value-add. More regular updates for new available clinical trials and harnessing artificial intelligence to better maintain websites were noted as potential ways to support currency of information. 2.5. Funding Developing a cost-effective centralised system and infrastructure is vital for equitable access. A collaborative effort between consumers, the health sector, and government should provide clear guidelines and procedures for accessing non-reimbursed molecular testing, ensuring transparency and equal opportunity for all individuals. Collaboration and financial support from government and industry could contribute to education, research, and resources, ultimately benefiting patients and advancing the field of molecular testing. Discussion Outcomes It was noted that health services need to work more closely with universities and research institutions to enhance access to non-reimbursed molecular testing. In the discussion groups (including both clinicians and consumers) there was a plea for increased effort to change the perception of non-reimbursed molecular testing as being purely for research purposes, as outlined recently. Partnerships (including joint funding) between government and industry may be the key to achieving more access to testing as well as off-label therapeutic options for some patients. Advocacy from both consumer groups and the health sector will play a vital role in making non-reimbursed testing a part of standard care and creating a centralised approach to testing, ensuring equal accessibility to resources for different organisations. The discussion participants did note that a push to get CGP reimbursed will require overcoming many challenges.
Implementing non-reimbursed molecular testing as the standard of care for all cancer patients raises questions about funding and scalability. It is important to determine who these tests will be funded by and how the results will be incorporated into patient care plans. This would need to link back to workforce education and how changes would be practically disseminated to clinicians. Inconsistent funding models limit the resources allocated to education initiatives. The absence of a single organisation responsible for the implementation of standardised education programs also adds complexity and requires coordination among multiple stakeholders. A collective impact approach is required to reduce duplication and ensure the greatest efficiencies. 2.6. Data and Sharing Data relating to non-reimbursed molecular testing, clinical utility, and patient outcomes in Victoria are not widely or uniformly collected or accessible for interrogation. Promoting data sharing among all stakeholders poses notable challenges. Presently, the absence of a clear linkage between data inputs and patient outcomes hinders the perceived benefits of data recording. Furthermore, mandating the collection of health data into a centralised repository will necessitate additional resources, thereby requiring a governing body to provide the necessary funding. Discussion Outcomes A significant barrier identified in the Summit discussion groups was the need to ensure data security and privacy when sharing molecular data across different healthcare institutions. Establishing secure data-sharing frameworks, data access controls, and compliance with relevant privacy regulations poses challenges that are important to overcome. Enacting legislation to standardise health data sharing was suggested by some participants as it would likely play a significant role in promoting data interoperability and privacy protections. 
Legal and policy frameworks would provide guidelines, requirements, and incentives for healthcare organisations to share molecular testing data while maintaining appropriate safeguards. Such legislation could facilitate more secure data exchange, enable research collaborations, and promote greater transparency. It was felt that demonstrating how data sharing leads to improved patient care, informed decision-making, and scientific advancements would encourage stakeholders to actively participate in data collection and data sharing initiatives. Outcome data may also lead to increased investment from pharmaceutical companies and other industry stakeholders who would be more likely to then support the sustainability and viability of data registries over the longer term. Governing bodies would also be able to track the progress of equity policies and frameworks. It was noted by participants that infrastructure such as the Victorian Cancer Registry already exists and could potentially be leveraged to include data pertaining to molecular testing that is already being collected, including through registry trials. Scoping work is required to identify existing resources that can be utilised, with the goal of avoiding duplication of data sets and the associated security risks that may arise. Engaging patients as active participants in data collection processes, such as through patient-reported outcomes, was suggested by the consumer discussion groups. Such information would enrich the datasets, enhance patient-centred research, and promote patient empowerment. Self-reporting may also overcome disparities in health reporting, particularly between regional and metropolitan patients.
Providing clinicians with comprehensive education on molecular testing, including its applications, interpretation, effective patient communication, clinical decision making, and fundamental genetics and genomics, will enhance their confidence and competence in selecting appropriate tests. This education needs to cover the selection of molecular testing panels, understanding the diagnostic or therapeutic purpose of the test, and the limitations of different testing approaches. Workforce education is significantly strained by the need to keep abreast of this rapidly evolving field, in which genomics is being integrated into routine care and decreasing costs are increasing demand. Challenges include the lack of a standardised curriculum, limited access to high-quality resources and educators, limited interdisciplinary training opportunities, ethical and regulatory considerations, integrating complex content like data science, and maintaining sustainability.

Discussion Outcomes

The available resources for upskilling in precision oncology are characterised by a lack of formal guidelines, diverse providers with significant quality disparities, and a predominantly ad hoc and self-directed approach to learning. Upskilling in this area is often overwhelming for clinicians, and a systemic, more structured approach to guide health professionals on the best pathways for upskilling is needed.
Such centralised education pathways could potentially be implemented through organisations such as the National Health and Medical Research Council (NHMRC) or the Royal Australasian College of Physicians (RACP), or by incorporating precision oncology as a formal degree requirement. One example, put forward by the Human Genetics Society of Australasia, is to integrate genomics and molecular testing into the training curriculum for clinicians, including pathologists and oncologists; however, it was also highlighted that ongoing training through professional development would be required due to the highly dynamic nature of genomics. Ways to further increase engagement could involve utilising gamified online learning tools, applying genomics-specific requirements within medical, nursing, and allied health continuing professional development frameworks, or providing focused sessions at conferences that start with the basics of precision oncology. Collaboration among hospitals across all regions is vital to share learnings and avoid fragmented pathways. Standardised guidelines, policies, and practices should be implemented and publicised through dedicated healthcare communication channels. The field of molecular testing is constantly evolving, and the discussion emphasised how challenging it is to determine what level of detail is relevant for the various sub-specialities, along with the modality, format, and platforms through which it will be delivered. Who should author the educational materials was also discussed; it was noted that expertise and credibility should be prioritised, and that the formation of a multidisciplinary subject-matter team is crucial to provide authoritative guidance, content, and effective delivery methods.
It is unreasonable to expect that all healthcare professionals will have a detailed, up-to-date knowledge of molecular testing; therefore, upskilling the majority in general knowledge (for example, knowing that tests exist and where to seek assistance), whilst providing specialised knowledge and skills for the limited number of professionals who require it, may be more realistic and efficient. It is essential to carefully address sustainability measures to keep content current and readily accessible whilst avoiding creating isolated knowledge silos within the healthcare community. Participants also recommended addressing cancer workforce skills shortages, particularly in data curation. With more molecular testing there will be a greater need for data curators with specialised oncology training. A significant lack of molecular pathologists specialising in the solid tumour space was also highlighted. Molecular pathology requires specific expertise, and the scarcity of professionals in this field, at least currently in Australia, hinders the widespread implementation of formal education programs and comprehensive workforce development.
Consumer awareness of, and literacy in, molecular testing and its implications for their care is important for improving autonomy, advocacy, and patient outcomes. Improved understanding of which groups (e.g., different geographical groups, CALD groups, etc.) are not accessing precision oncology will be important so that communication can be tailored in an appropriate format to reach those most in need.

Discussion Outcomes

A strong link between consumer and oncologist was highlighted as being very important by discussion participants, a fact supported by the recent literature. However, support networks and other allied health professionals were also suggested to ensure consumers feel empowered to ask questions and make informed decisions about how molecular testing could benefit their treatment.
To achieve this, it was noted that clinicians (including primary care clinicians) and nurses need to better understand patient needs and have higher levels of awareness of the benefits of molecular testing and support to disseminate this information to patients. A recent review of patient experiences of molecular testing noted that patients prefer to discuss their molecular testing results with an oncologist or nurse, rather than receiving written, internet-based, or video-based resources to explain the findings. The role of nurses as educators and pathway navigators was noted as being highly valued by Summit participants and could be a potential avenue for increasing patient education. The role of the consumer as an advocate was also raised in the discussion groups. Consumer advocates provide an important avenue to disseminate information and inspire and educate others. Harnessing community connections could assist consumers in understanding the options available to them, along with providing support to access information from existing credible resources such as Cancer Australia and the Cancer Councils. Videos and blogs of consumer stories could be another way to help raise awareness. Providing consumers with a platform to advocate for issues important to them, and to help shape policy direction, is critical; however, consideration must be given to who should act in this role and how adequate representation across the spectrum of consumers could be achieved. Improved coordination of support groups, and the establishment of a consumer alliance, would be one way to facilitate this. Developing a clear, purposeful, and collaborative consumer advocacy strategy will be essential for maximising the impact of the consumer voice and for improving consumer awareness of the benefits of precision oncology.
Considerations should include consultation with diverse populations, development of training and education programs for consumer advocates, and centralised coordination of advocacy groups to lobby for positive change more powerfully.
Developing standardised guidelines and frameworks is essential to provide clinicians with the confidence to order molecular testing for their patients. Consistency in terminology, reporting, and outcomes (as they relate to clinical significance and local clinical trial information) is crucial to mitigate against misinterpretation and improve the quality and reliability of the test results. Establishing quality benchmarking and optimal care pathways in precision oncology will be important for embedding genomics and precision oncology into routine clinical workflows and for improving equitable care for patients.

Discussion Outcomes

Variations in data collection practices were noted by Summit discussion participants as a major issue, as they give rise to disparate datasets that impede the effective exchange and analysis of information. This situation can be attributed primarily to the absence of standardised reporting mechanisms and a shared language specifically tailored for molecular testing.
Participants agreed that the National Association of Testing Authorities (NATA)-mandated minimum requirements are currently too broad, and inclusions such as mutations described using Human Genome Variation Society (HGVS) nomenclature, clinical significance tiered according to the Association for Molecular Pathology guidelines, and information about local clinical trials were all recommended. Implementing standardised reporting and collection methods will be essential for ensuring consistency and interoperability across different molecular testing practices. Developing templates that capture relevant clinical and patient information, tailored to specific cancer streams or panel sizes, will enable more accurate and uniform data collection. A recent survey of over 100 Australian oncologists reported a recommendation to include evidence-based therapy recommendations and gene rankings in comprehensive genomic profiling reports as standard practice. Establishing a shared language for data reporting will facilitate meaningful data exchange and analysis. It was also suggested that promoting a closer network and collaboration among clinicians could enhance the understanding and utilisation of molecular testing. Establishing forums, conferences, and platforms for knowledge sharing and discussion would facilitate the exchange of best practices and improve the consistency and standardisation of testing approaches. Improving the awareness and accessibility of molecular tumour boards was also suggested as an enabler towards greater equity of access to molecular tests.
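To make the recommended inclusions concrete, the sketch below renders a single variant as a standardised report line using HGVS nomenclature and an AMP-style clinical-significance tier. The function name, field names, transcript version, and trial identifier are assumptions for illustration, not a prescribed reporting template.

```python
# Illustrative sketch of a standardised report line for one variant.
# The HGVS strings follow Human Genome Variation Society nomenclature;
# the tier follows the AMP four-tier clinical-significance scheme.
# All names below are assumptions for illustration only.

def format_report_entry(gene, hgvs_c, hgvs_p, tier, trials):
    """Render one variant as a single standardised report line."""
    trial_note = ", ".join(trials) if trials else "no local trials identified"
    return f"{gene} {hgvs_c} ({hgvs_p}) - Tier {tier}; local trials: {trial_note}"

line = format_report_entry(
    gene="BRAF",
    hgvs_c="NM_004333.4:c.1799T>A",  # coding DNA-level HGVS description
    hgvs_p="p.(Val600Glu)",          # protein-level HGVS description
    tier="I",                        # strongest AMP clinical-significance tier
    trials=["EXAMPLE-TRIAL-001"],    # hypothetical local trial identifier
)
print(line)
```

Capturing the gene, both HGVS levels, the tier, and local trial availability in a fixed shape like this is what allows disparate laboratories' reports to be pooled and compared.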
Centralisation was a common theme that emerged across several topics at the Summit, with participants calling for centralised platforms of information relating to the molecular tests themselves, as well as around clinical trials, data interpretation tools, and education resources.
Discussion Outcomes

The rapid rate of change within precision oncology is a barrier to clinicians maintaining up-to-date knowledge of the appropriate test, the interpretation of that test, and access to potential clinical trials. Access to a centralised resource of experts in precision oncology that can guide decision making and advise on access to treatments and trials, as well as a monitored, broadly available troubleshooting platform and/or decision tree, are required to better support clinicians to provide equitable care for all cancer patients. The centralised approach to tissue typing and the National Coronavirus Helpline were identified as existing models that such a resource could be based on. This could also be achieved through a clinical fellow ‘on call’ through the VCCC Alliance and/or MPCCC who could provide personalised support, and regional “champions” skilled in precision oncology to support a given regional area. Molecular Tumour Boards were again identified as an important educational and peer-support tool. Currently, informal networks (such as instant messenger groups) are available to some clinicians, but it was noted that the ad hoc nature of this is perpetuating inequity. In terms of the tests themselves, creating a centralised platform entity that integrates pathology services and outlines guidelines for specific tumour types may address the issue of fragmented laboratories. This platform could serve as a repository of information, including recommended tests, tissue type requirements, and turnaround times. Encouraging collaboration and cooperation among pathology laboratories through the establishment of pathology networks could help coordinate testing efforts and prevent fragmented and duplicative approaches. This integrated approach would facilitate the development of common structures, guidelines, and quality assurance measures for molecular testing.
Establishing a central registry through data linkage would enable more comprehensive data aggregation and analysis, support population-level research, and provide a broader perspective on molecular testing outcomes that could drive future innovation. A centralised registry would reduce the risk of health data loss when patient care transitions between different healthcare providers. A barrier to centralisation identified across different discussion groups was that databases are not well maintained and can be difficult to navigate. Development of a centralised website, other online resource, or an application that brings together all local information about available genomic tests and clinical trials, and mandates the inclusion of clinical trial information, would likely be a strong value-add. More regular updates for newly available clinical trials and harnessing artificial intelligence to better maintain websites were noted as potential ways to support currency of information.
Developing a cost-effective centralised system and infrastructure is vital for equitable access. A collaborative effort between consumers, the health sector, and government should provide clear guidelines and procedures for accessing non-reimbursed molecular testing, ensuring transparency and equal opportunity for all individuals. Collaboration and financial support from government and industry could contribute to education, research, and resources, ultimately benefiting patients and advancing the field of molecular testing.

Discussion Outcomes

It was noted that health services need to work more closely with universities and research institutions to enhance access to non-reimbursed molecular testing. In the discussion groups (including both clinicians and consumers) there was a plea for there to be increased effort to change the perception of non-reimbursed molecular testing as being purely for research purposes, as outlined recently. Partnerships (including joint funding) between government and industry may be the key to achieving more access to testing as well as off-label therapeutic options for some patients. Advocacy from both consumer groups and the health sector will play a vital role in making non-reimbursed testing a part of standard care and creating a centralised approach to testing, ensuring equal accessibility to resources for different organisations. The discussion participants did note that a push to get CGP reimbursed will require overcoming many challenges.
The VCCC Alliance and MPCCC are developing a collaborative Precision Oncology Roadmap that will provide a series of recommendations to address the current inequity of access to molecular testing based on the discussions held at the Victorian Precision Oncology Summit and subsequent consultation interviews with a range of key national stakeholders.
These recommendations will have a focus on the Victorian health sector, in the context of, and complementing broader national programs including (but not limited to) initiatives such as the Cancer Australia National Cancer Genomics Framework, which is part of the Australian Cancer Plan, and the work led by OMICO. Given the high level of engagement of a broad range of health professionals, data experts and industry partners at the Victorian Precision Oncology Summit, the early willingness of key stakeholders to be part of a more in-depth consultation process, and the well-established infrastructure and reach of the VCCC Alliance and MPCCC member and partner-base, Victoria is well placed to pilot initiatives that address key recommendations. Upon successful implementation, such pilots could be expanded nationally to improve equity of access to molecular testing for the benefit of all Australian cancer patients.
Recognizing African‐American contributions to neurology: The role of Solomon Carter Fuller (1872–1953) in Alzheimer's disease research | 9f7f1b54-f2fa-4c05-8e53-a90dd74cf2de | 7986064 | Pathology[mh] | INTRODUCTION Racial inequality remains a considerable problem in society worldwide. For Black and African‐American individuals, it is a sobering and painful reality that has persisted for almost six decades since the Civil Rights Act was passed in 1964. However, in recent months, a rising crescendo of protest against racial discrimination has emerged in the United States and internationally. It has become a modern‐day social renaissance, not only seeking to bridge the gap between races but also to rediscover and recognize previously neglected Black, Asian and minority ethnic (BAME) influences in society. Medicine is no exception: The role of BAME physicians in pioneering disease research is rich but relatively under‐recognized in modern medical literature. Dr Solomon Carter Fuller, the focus of this article, is widely acknowledged as the first African‐American psychiatrist and, alongside his contemporary Alois Alzheimer, a trailblazer of dementia research. Yet at the time of writing (August 4, 2020), a simple PubMed search for “Solomon Carter Fuller” or “Solomon Fuller” in the “Title” field yielded a mere 2 results, compared with 41 when replaced with “Alois Alzheimer.” By delving into his life, experiences, and contributions to neurology, this brief biography seeks to unravel the obscurity that has veiled the accomplishments of Solomon Carter Fuller in Alzheimer's disease research, and in doing so aims to mark a step in the broader recognition of the influences of physicians of ethnic minority background in medical advancement. 
METHODS

A literature search of the following databases was performed to identify pertinent articles relating to the life and/or work of Solomon Carter Fuller: PubMed, Cochrane Database of Systematic Reviews, MEDLINE, EMBASE, and Google Scholar. All databases were searched from their date of inception to August 2020, with selected references exported and formatted using EndNote X9. Specific search terms and Boolean search modifiers included "Solomon Carter Fuller" and "Solomon Fuller." No restrictions on language or publication date were placed.

BIOGRAPHY

Dr Fuller (figure 1) was born on August 11, 1872 in Monrovia, Liberia; his paternal grandfather had emigrated from the United States to Liberia upon buying his and his wife's freedom from slavery. His father, a coffee planter and government official, oversaw Fuller's education on the plantation during his formative years, although it was his maternal grandparents, both medical missionaries in Liberia, who were thought to have influenced Solomon's interest in medicine. At age 17, Fuller left Liberia for the United States and attended Livingstone College in North Carolina, a historically Black private institution founded to support higher education among African Americans. Graduating with a BA degree in 1893, he subsequently began his medical career as a student at Long Island College Hospital in Brooklyn, New York. Just 6 years earlier, the college had admitted Dr Susan Smith McKinney, the third African‐American woman to hold a medical degree in the United States and the first in the state of New York for post‐graduate study. Dr Fuller completed his medical education at Boston University with an MD in 1897 at 25 years of age. Here too, he joined illustrious company. In 1864, preceding Dr Fuller's birth, Dr Rebecca Lee Crumpler graduated from New England Female College, an antecedent component of Boston University, to become the first African‐American physician in the United States.
Dr Fuller went on to complete a 2‐year internship at Westborough State Hospital, Massachusetts. His interest in the study of neurological and psychiatric disease, and their neuropathological basis, led him to conduct post‐mortem examinations voluntarily during this period. Through doing so, Fuller was appointed Hospital Pathologist and Instructor of Pathology at Boston University following his internship in 1899.

Research in Context

Systematic review: The author reviewed existing literature using traditional (eg, PubMed and MedLine) sources and meeting abstracts. Primary sources, published by Dr Fuller, were identified and reviewed where available and digitized. Several key secondary sources were used to support interpretation of Dr Fuller's published material and are appropriately cited. Despite this, detailed knowledge of the life and works of Solomon Carter Fuller remains limited and merits recognition given his contributions to neurology.

Interpretation: In being arguably the most concise yet holistic review of Dr Fuller's life and academic achievements to date, this article highlights the need for further medico‐historical research into Dr Fuller's life and research in Germany.

Future direction: This article seeks to provide a basis for greater recognition of physicians of BAME background in medicine, and prompt reflection of racial discrimination faced by these physicians.

Three years later, at age 30, Fuller joined Professor Edward Dunham at Bellevue Medical College, New York, with the aim of enhancing his technical histological skills and obtaining further post‐mortem examination exposure. Dunham was a leading pathologist in the United States at the time, having spent 1 year working in Robert Koch's laboratory in Berlin, where he established the use of sulfuric acid as a test reagent in the identification of Vibrio cholerae, a test known today as the "Cholera‐red" or indole reaction.
Dr Fuller's important role in Alzheimer's disease research began to unfold in 1904, when he was one of five foreign laboratory research assistants selected by Alois Alzheimer to work at the newly created Royal Psychiatric Hospital at the University of Munich, then headed by renowned psychiatrist Emil Kraepelin. Information regarding Dr Fuller's life and work in Germany is limited and may reflect the general air of invisibility and anonymity that research assistants worked within at the turn of the 20th century. Further medico‐historical research is warranted to fill this gap in our knowledge of Fuller's life, not least because it covers a defining period in his career. Such information may shed light on Fuller's interactions with Alzheimer and his reception and treatment as an African‐American academic working in Germany. Nevertheless, in addition to working with Alzheimer, Dr Fuller also worked to broaden his grasp of microbiology at the university's Institute of Pathology with Professors Otto Bollinger and Hans Schmaus. The former had researched extensively in veterinary medicine and the latter was more focused on spinal cord pathology.

The secondment in Germany was short‐lived but impactful; Fuller returned to Westborough Hospital in 1905, continued his role as neuropathologist, and founded and edited the "Westborough State Hospital Papers," a journal that published local research activity. His interest in the eponymously named Alzheimer's disease, coined by Kraepelin in 1910, led him to write extensively and become a leading authority on the subject. In 1919, at age 47 years, Fuller resigned from Westborough Hospital and dedicated his time to medical education at Boston University. He became associate professor of neuropathology that year and 2 years later associate professor of neurology. Despite holding these positions and being the only African American on the faculty at the time, Fuller found himself on the receiving end of racial discrimination.
He was paid less than his fellow white professors and not formally acknowledged on the university's payroll. From 1928 to 1933, he acted as chair of the Department of Neurology yet was not actually afforded the title. In fact, his retirement in 1933, at age 61, came after a junior white assistant professor was promoted to full professorship and appointed the official departmental chair, a move Fuller felt may not have occurred had he been white. In his own words, Fuller commented, “With the sort of work that I have done, I might have gone farther and reached a higher plane had it not been for the colour of my skin.” Upon his retirement, Fuller was given the title of emeritus professor of neurology at Boston University, although he continued to practice neurology and psychiatry in Massachusetts and for a period in Pennsylvania. He began to suffer increasingly from diabetes, such that by 1944 he had lost his eyesight completely. At age 81 in 1953, Dr Fuller died of diabetes and gastrointestinal malignancy. Shortly before his passing, he was visited by the neurologist Dr James Ayer who remarked, “though blind, his memory was excellent, his speech flawless, his interests alive. He knew he had not long to live, but accepted the fact in his usual, philosophical manner, like the perfect gentleman he was.”
CONTRIBUTION TO NEUROLOGY

Dr Fuller's contribution to neurology and, more specifically, Alzheimer's disease, is understated and more impressive given the odds he faced as an African American in what was then a primarily white male–dominated profession. In 1907, following his return from Munich 2 years earlier, Fuller published a case series describing the neuropathological features on autopsy of patients diagnosed with conditions including "dementia paralytica", "dementia senilis," and chronic alcoholism. In it he reported abnormal neuronal appearances and the presence of neurofibrils in cases of "dementia senilis" and "dementia paralytica", while also recognizing the influence of Kraepelin and Alzheimer in furthering his career in Germany and their input in dementia research to date. Four years later, in one of the first studies to appraise the role of senile plaques in aging, Fuller supported Alzheimer's observation in refuting the role of arteriosclerosis in plaque formation and questioned the importance of plaques and neurofibrillary pathology as hallmarks of Alzheimer's disease. Dr Fuller's seminal piece came in 1912 when he published, in two parts, the first comprehensive review of Alzheimer's disease at the time. As well as reviewing 11 known cases and translating Alzheimer's original case in English for the first time, he also described the ninth recorded case of the disease. Dr Fuller's patient was a 56‐year‐old man with a 2‐year history of memory impairment, receptive dysphasia, and apraxia.
Autopsy revealed “regional cerebral atrophies,” a degree of large vessel arteriosclerosis, extensive plaque presence, and intracellular "Alzheimer degeneration," comprising a “tangled mass of thick, darkly staining snarls and whirls of the intracellular fibrils,” reflective of neurofibrillary tangles. In the same year, Solomon Fuller and his colleague Henry Klopp described a case of suspected Alzheimer's disease that did not exhibit intracellular neurofibrils on autopsy, yet bore senile plaques and clinical similarities to previously diagnosed cases. In recognizing the case as “an example of the group now designated as Alzheimer's disease,” Fuller and Klopp unequivocally accepted Alzheimer's disease as a clinical entity but stopped short of regarding it distinct from senile dementia altogether. To this end they judged Alzheimer's disease to be an atypical type of senile dementia. Fuller's academic interests extended beyond neurology in that he published broadly on subjects ranging from pernicious anemia in the "insane" and the effects of belladonna on animal tissue, to melancholia and "manic‐depressive insanity." In a non‐research capacity, Fuller was critical in establishing a foothold for African‐American physicians in psychiatry through his selection and training of three young trainees at Tuskegee Veterans Administration Hospital, Alabama.

DISCUSSION

Beyond an Honorary Doctor of Science degree awarded by Livingstone College in 1943, Solomon Carter Fuller's accomplishments in Alzheimer's disease were relatively undervalued. In 1971, the Black Psychiatrists of America presented a portrait of Dr Fuller to the American Psychiatric Association, which recognized him as the nation's first Black psychiatrist. Dr Fuller's achievements were further celebrated in 1973 in a 1‐day conference at Boston University.
The following year, the Solomon Carter Fuller Mental Health Center was established via Massachusetts legislation to provide outpatient psychiatric services and facilitate research and education. Unfortunately, despite Fuller's work and subsequent posthumous recognition of African‐American influences in neurology and psychiatry, there remains a racial disparity in the provision of mental health care in the United States. African Americans share a frequency of mental illness similar to that of their White counterparts, yet have reduced access to mental health services and appropriate treatments. Recognition of this inequality is particularly crucial in the present day as the Black and African‐American community finds itself disproportionately affected by the coronavirus disease 2019 (COVID‐19) pandemic. Perceived racial discrimination is associated with adverse mental health outcomes and has been intensified by the pandemic. Coupled with suspected racial bias in disease testing, this raises concerns of excess physical morbidity and mortality as well as an increased risk of mental illness within this community, in turn fueling increased demand for services.

Dr Fuller's experiences closely mirror those of other visionary Black scientists who emerged in post‐emancipation America and faced racial discrimination. Edward Bouchet is a noteworthy example who, at age 24, was awarded a doctorate (PhD) by Yale University for research on geometric optics. In doing so he became the first African American to receive a PhD in any field in the United States, and only the sixth person of any race to receive a PhD in physics from an American university. Despite his credentials, Bouchet's academic career was effectively brought to a premature end by a reluctance within higher education at the time to appoint Black faculty.
Bouchet was unable to secure a university teaching or research position and spent most of his remaining career teaching at a high school established for African‐American students. Fuller fared comparatively better in this regard, albeit receiving his medical degree just over 20 years after Bouchet earned his doctorate. Of interest, the extent of racial discrimination faced by both individuals reflects the changing attitude toward African‐American involvement in academia within the space of almost 50 years. At the zenith of their careers, whereas Bouchet was deemed unappointable to university faculty positions, Fuller was able to reach the height of Departmental Chair of Neurology at Boston University, but found himself acting as such incognito, without formal recognition, and ultimately superseded by a junior white colleague.

CONCLUSION

The latter half of the 19th century was witness to a period of enlightenment in neurology that was propelled by celebrated figures including William Gowers, John Hughlings Jackson, and Alois Alzheimer. Dr Fuller is another such figure but whose name and achievements have largely been ignored by the annals of history. Raised from humble beginnings in Liberia, Fuller overcame overwhelming odds in an environment hostile to African‐American progress to become one of a select few to pioneer dementia research. His translation of Alzheimer's work, combined with his own observations, arguably enabled the concept of Alzheimer's disease as a novel clinical entity to spread throughout the English‐speaking world. However, details of Fuller's experiences in Germany are limited and merit further research. In addition to barriers to academic progression, they appear to suggest a reluctance within society at the time to acknowledge African‐American excellence in medicine. Solomon Carter Fuller was a father and husband, a keen gardener, and master bookbinder.
For the scientific community, he was an outstanding physician who excelled in a country where his grandfather had been enslaved and obtained his freedom. His career trajectory may have been greater had he not been African American, yet this did not hold him back from becoming a pioneer in Alzheimer's disease research.
Assessing the reliability of pediatric emergency medicine billing code assignment for future consideration as a proxy workload measure | 9f33909e-6c0b-47a3-ba4f-edc04f0e1bbb | 10456198 | Pediatrics[mh] | Crowding is a common problem in pediatric emergency departments (PEDs) and can negatively impact patient health outcomes and clinicians’ wellness . Chan et al. attributes PED crowding in part to inefficiencies in the patient flow—namely, the input, throughput, and output factors . The input and output factors, defined as the number of incoming patients and disposition respectively, are generally not under the control of the PED. However, the rate at which patients are treated, known as the throughput, can be improved by optimizing the allocation of resources such as space and staffing assignments . This can be achieved using a proxy measure to quantify PED physician workload, allowing for prediction of resource needs to guide allocation and ensure efficient PED throughput. To date, there has been two proposed measures to estimate PED physician workload; however, neither are validated for workload estimation. The first is the time needed to treat, as measured by the direct interaction time spent between the PED physician and the patient . However, workload is determined by a multitude of different factors in addition to the time needed to treat, including mental demand, physical demand, and psychological stress . Therefore, time needed to treat by itself cannot adequately represent PED physician workload. Furthermore, it is generally not a conventionally collected variable in the PED and is labour intensive to record, making it largely unavailable for academic and administrative purposes. The second measure, sometimes perceived as a surrogate for physician workload is the Pediatric Canadian Triage and Acuity Score (PaedCTAS). 
This is a triaging tool that evaluates the urgency of the patient’s needs based on their clinical presentation to prioritize access to care in the PED. While the PaedCTAS has been shown to correlate with PED disposition, it was not designed to measure workload, nor has it been evaluated for such purposes. Of note, there is evidence to suggest that using the CTAS (adult equivalent of PaedCTAS) alone is not sufficient for determining physician workload in the general emergency department (ED) setting given the large variability in their workload measure at each triage level. This brings into question the validity of using PaedCTAS, a derivative of the original CTAS triaging tool, to be used as a measure of physician workload.

To address the current lack of PED physician workload proxy measure, we propose evaluating billing codes, which are assigned by physicians for compensation either for direct remuneration or shadow estimation of workload and administrative purposes, after each patient encounter based on their impression of the amount of work required to treat the patient. Throughout Canada, many EDs use either a 2 or 3-level billing code system, with greater levels indicating more complexity and work required to manage the patient encounter; some systems also include modifiers which account for other factors such as time of the day, patient age, and procedures performed. With the 3-level system, level 1 is assigned for treatment involving a single organ system or a simple condition, level 2 for conditions which necessitate treatment of at least 2 organ systems with a need for reassessment during the visit, and level 3 for complex conditions requiring prolonged observation and therapy with multiple assessments. In British Columbia (BC) alone, billing code data is used to estimate workload in the fee-for-service setting to allocate approximately $75 million of funding to emergency physicians.
Given that billing codes are readily reported, and that physician remuneration already relies on them to measure workload, this variable holds potential to be a proxy measure of PED workload. To assess if physician-assigned billing codes can approximate physician workload, we must evaluate the degree of reliability with which PED physicians assign these billing codes. Inter-rater agreement of billing codes has been evaluated in other medical specialties and reliability has been found to vary between them. In this study, we aim to assess how reliably PED physicians bill when compared to a billing expert who is also the provincial auditor. In addition, we aim to identify which factors are associated with inter-rater reliability.

Study objective and design

We conducted a retrospective cross-sectional study at BC Children’s Hospital (BCCH) ED to evaluate the reliability of billing codes assigned by PED physicians compared to the billing code assigned by a billing auditor, who is one of the listed authors of the research group (G.M.) and does not work at the BCCH. The billing auditor selected is an emergency physician and the Chair for the Fee-For-Service Committee within the Section of Emergency Medicine at Doctors of BC, the association representing physicians in BC. Given the billing auditor’s clinical and administrative expertise in emergency medicine, their interpretation of billing code was used as the criterion standard. The primary objective for this study was to evaluate how consistently billing codes are assigned by determining the inter-rater reliability between PED physician assigned billing codes and billing auditor assigned billing codes. Our secondary objective was to identify visit characteristics associated with inter-rater reliability.

Study setting and population

BCCH ED is a quaternary care referral centre located in Vancouver, BC with approximately 50,000 annual visits.
We collected data from a random sample of visits from children aged up to 18 years who visited the BCCH ED between January 1st, 2018 and December 31st, 2018 inclusive, provided that the patient did not leave without being seen by a physician and that the physician assigned a billing code to their visit. We used health records provided by the Provincial Health Service Authority Data Analytics, Reporting and Evaluation (PHSA DARE) Office. A timeframe before the COVID-19 pandemic was studied to ensure physician billing practices were unaltered by pandemic precautions such as extra PPE and disease screening. The sample of visits was evenly distributed between months of the year and with representation of all 5 levels of the PaedCTAS scale, with propensity for PaedCTAS 3 and 4 as they generally make up the majority of all PED visits. While our PED is staffed with pediatric emergency medicine physicians, general emergency physicians, and nurse practitioners, our study only included visits which were managed or supervised (when a trainee is involved) by a pediatric or general emergency medicine physician, as other care providers do not assign billing codes. Physicians at BCCH ED are paid on an alternate payment plan and therefore utilize the shadow billing system, whereby billing codes are assigned not for remuneration, but for both individual physician performance monitoring and group contract negotiation. Ethics approval was obtained from the University of British Columbia and BC Women and Children’s Hospital Research Ethics Board and the requirement for informed consent was waived by the two ethical governing bodies.

Outcome measures

The inter-rater reliability between PED physicians and the billing auditor was evaluated using percentage agreement and Gwet’s AC2 with 95% confidence intervals (CI) as our primary outcome measure.
As the secondary outcome measure, we calculated the percentage agreement and AC2 values stratified by visit characteristics including triage categories (PaedCTAS 1–5), patient age (<1y, 1-5y, >5y), whether clinical trainees were involved, time of disposition (day 0800-1800h, evening 1800-2300h, and night 2300-0800h), and disposition (discharged vs. admitted).

Study procedure

From the chart review, we extracted the billing code assigned by the PED physician and the clinical variables needed for the billing auditor to assign a billing code. Clinical variable selection was informed by consultations with clinicians and published literature around the subject of physician workload. These variables include those which were found to be strong predictors for workload intensity such as the PaedCTAS score, presentations or comorbidities related to mental health, requirement for ambulance, laboratory and imaging ordered, number of subspecialty consultations, procedures performed, need for sedation, trainee involvement, language barrier, disposition, and length of stay. As well, information which can inform the billing auditor of the clinical context was also collected, such as the patient demographic, chief complaint, the history of presenting illness, physical exam findings, vital signs, and any other progress notes or text relevant to the patient visit. The clinical variables were collected by two trained research students onto the Research Electronic Data Capture platform, a BCCH Research Institute licensed data capture software. The authors had access to patient identifiers, such as the personal healthcare number, during data extraction, but these identifiers were not collected. To ensure inter-extractor reliability between the students, data extraction training was carried out by a PED physician. Both students separately extracted the data from 15 charts (10% of total sample size) and compared their output for any discrepancies, which were resolved by consensus.
This process was repeated until the extracted data between the two students matched for all 15 charts at which point the remaining charts were divided and the data was extracted by each student. In total, 30 charts were extracted in tandem. The data collection was conducted between August and October of 2021. Following data collection, the billing auditor was given the extracted clinical data to assign a billing code. The billing auditor was blinded to PED physicians’ billing codes.

Analysis approach

We report descriptive statistics to summarize our study sample, using proportions with 95% CI as appropriate. The percentage agreement and Gwet’s AC2 statistics were used as the measure of reliability in the PED physicians’ billing practices. The AC2 statistic was chosen for its resiliency against the effects of trait prevalence, where high chance agreement can paradoxically result in low chance-corrected agreement despite relatively higher percentage agreement. The Landis and Koch criterion was used to interpret the AC2 values, which categorizes the chance-corrected agreement statistics as follows: 0–0.20 slight agreement, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and >0.80 excellent agreement. AC2 was calculated with linear weighting. We completed univariate logistic regression models to determine the impact of covariates of interest on inter-rater reliability, then adjusted for potential confounders. Analyses were performed using R statistical software. Given that there are two raters (PED physicians and the billing auditor) and three categories (billing codes 1, 2, 3), to estimate AC2 within a margin of 0.15 with 95% confidence, a sample size of 90 was required. We added a margin to ensure that we would obtain 150 chart visits that meet all our eligibility criteria and requested 300 randomly selected charts meeting specified distribution over time and acuity from the PHSA DARE Office.
Upon receipt, we used the Microsoft Excel’s random number generator function and reviewed charts in a randomized sequence to review for eligibility and extract data until the sample size of 150 was met. We conducted a retrospective cross-sectional study at BC Children’s Hospital (BCCH) ED to evaluate the reliability of billing codes assigned by PED physicians compared to the billing code assigned by a billing auditor, who is one of the listed authors of the research group (G.M.) and does not work at the BCCH. The billing auditor selected is an emergency physician and the Chair for the Fee-For-Service Committee within the Section of Emergency Medicine at Doctors of BC, the association representing physicians in BC. Given the billing auditor’s clinical and administrative expertise in emergency medicine, their interpretation of billing code was used as the criterion standard. The primary objective for this study was to evaluate how consistently billing codes are assigned by determining the inter-rater reliability between PED physician assigned billing codes and billing auditor assigned billing codes. Our secondary objective was to identify visit characteristics associated with inter-rater reliability. BCCH ED is a quaternary care referral centre located in Vancouver, BC with approximately 50,000 annual visits . We collected data from a random sample of visits from children aged up to 18 years who visited the BCCH ED between January 1 st , 2018 to December 31 st , 2018 inclusive, provided that the patient did not leave without being seen by a physician and that the physician assigned a billing code to their visit. We used health records provided by the Provincial Health Service Authority Data Analytics, Reporting and Evaluation (PHSA DARE) Office. A timeframe before the COVID-19 pandemic was studied to ensure physician billing practices were unaltered by pandemic precautions such as extra PPE and disease screening. 
The sample of visits was evenly distributed between months of the year and with representation of all 5 levels of the PaedCTAS scale, with propensity for PaedCTAS 3 and 4 as they generally make up the majority of all PED visits . While our PED is staffed with pediatric emergency medicine physicians, general emergency physicians, and nurse practitioners, our study only included visits which were managed or supervised (when a trainee is involved) by a pediatric or general emergency medicine physician, as other care providers do not assign billing codes. Physicians at BCCH ED are paid on an alternate payment plan and therefore utilize the shadow billing system, whereby billing codes are assigned not for remuneration, but for both individual physician performance monitoring and group contract negotiation. Ethics approval was obtained from the University of British Columbia and BC Women and Children’s Hospital Research Ethics Board and the requirement for informed consent was waived by the two ethical governing bodies. The inter-rater reliability between PED physicians and the billing auditor was evaluated using percentage agreement and Gwet’s AC 2 with 95% confidence intervals (CI) as our primary outcome measure. As the secondary outcome measure, we calculated the percentage agreement and AC 2 values stratified by visit characteristics including triage categories (PaedCATS1-5), patient age (<1y, 1-5y, >5y), whether clinical trainees were involved, time of disposition (day 0800-1800h, evening 1800-2300h, and night 2300-0800h), and disposition (discharged vs. admitted). From the chart review, we extracted the billing code assigned by the PED physician and the clinical variables needed for the billing auditor to assign a billing code. Clinical variable selection was informed by consultations with clinicians and published literature around the subject of physician workload. 
These variables include those found to be strong predictors of workload intensity, such as the PaedCTAS score, presentations or comorbidities related to mental health, requirement for ambulance, laboratory and imaging ordered, number of subspecialty consultations, procedures performed, need for sedation, trainee involvement, language barrier, disposition, and length of stay . Information that could inform the billing auditor of the clinical context was also collected, such as patient demographics, chief complaint, the history of presenting illness, physical exam findings, vital signs, and any other progress notes or text relevant to the patient visit. The clinical variables were collected by two trained research students using the Research Electronic Data Capture platform, a BCCH Research Institute licensed data capture software. The authors had access to patient identifiers, such as the personal healthcare number, during data extraction, but these were not collected. To ensure inter-extractor reliability between the students, data extraction training was carried out by a PED physician. Both students separately extracted the data from 15 charts (10% of the total sample size) and compared their output for any discrepancies, which were resolved by consensus. This process was repeated until the extracted data between the two students matched for all 15 charts, at which point the remaining charts were divided and the data was extracted by each student. In total, 30 charts were extracted in tandem. The data collection was conducted between August and October 2021. Following data collection, the billing auditor was given the extracted clinical data to assign a billing code. The billing auditor was blinded to PED physicians’ billing codes. We report descriptive statistics to summarize our study sample, using proportions with 95% CI as appropriate.
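The duplicate-extraction consistency check between the two students can be illustrated with a small comparison routine (hypothetical field names; a sketch of the idea, not the study's actual tooling):

```python
def extraction_discrepancies(chart_a, chart_b):
    """Compare the same chart extracted by two students and return the
    fields whose values disagree (to be resolved by consensus)."""
    fields = set(chart_a) | set(chart_b)
    return {f: (chart_a.get(f), chart_b.get(f))
            for f in fields
            if chart_a.get(f) != chart_b.get(f)}
```

An empty result for all 15 training charts corresponds to the stopping rule described above; any non-empty result lists exactly the fields the two extractors must reconcile.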
The percentage agreement and Gwet’s AC2 statistics were used as the measures of reliability in the PED physicians’ billing practices. The AC2 statistic was chosen for its resiliency against the effects of trait prevalence, where high chance agreement can paradoxically result in low chance-corrected agreement despite relatively high percentage agreement . The Landis and Koch criterion was used to interpret the AC2 values, which categorizes the chance-corrected agreement statistics as follows: 0–0.20 slight agreement, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and >0.80 excellent agreement . AC2 was calculated with linear weighting. We completed univariate logistic regression models to determine the impact of covariates of interest on inter-rater reliability, then adjusted for potential confounders. Analyses were performed using R statistical software. Given that there are two raters (PED physicians and the billing auditor) and three categories (billing codes 1, 2, 3), to estimate AC2 within a margin of 0.15 with 95% confidence, a sample size of 90 was required . We added a margin to ensure that we would obtain 150 chart visits meeting all our eligibility criteria and requested 300 randomly selected charts meeting the specified distribution over time and acuity from the PHSA DARE Office. Upon receipt, we used Microsoft Excel’s random number generator function and reviewed charts in a randomized sequence for eligibility, extracting data until the sample size of 150 was met. We requested 300 randomized patient records from the PHSA DARE Office, and reached the sample size requirement of 150 after reviewing 187 charts . The distribution of subgroups across our sample is outlined in . Overall, the percent agreement between PED physician and billing auditor was 68.7%. There was substantial inter-rater reliability (AC2: 0.72, 95% CI: 0.64–0.80).
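For intuition, Gwet's AC2 with linear weights for two raters over three ordinal billing codes, together with the Landis and Koch bands quoted above, might be computed as follows (a minimal Python sketch; the study's analyses used R):

```python
def gwets_ac2(r1, r2, categories):
    """Gwet's AC2 for two raters over ordered categories, linear weights."""
    q, n = len(categories), len(r1)
    idx = {c: i for i, c in enumerate(categories)}

    def w(k, l):                       # linear weight: 1 on the diagonal
        return 1 - abs(k - l) / (q - 1)

    # weighted observed agreement
    pa = sum(w(idx[a], idx[b]) for a, b in zip(r1, r2)) / n
    # average marginal proportion per category across both raters
    pi = [(r1.count(c) + r2.count(c)) / (2 * n) for c in categories]
    t_w = sum(w(k, l) for k in range(q) for l in range(q))  # sum of all weights
    pe = t_w / (q * (q - 1)) * sum(p * (1 - p) for p in pi)  # chance agreement
    return (pa - pe) / (1 - pe)

def landis_koch(ac):
    """Interpretation bands as quoted in the study."""
    if ac <= 0.20: return "slight"
    if ac <= 0.40: return "fair"
    if ac <= 0.60: return "moderate"
    if ac <= 0.80: return "substantial"
    return "excellent"
```

With identity weights in place of linear ones, the same formula reduces to the unweighted AC1, which is why AC2 is often described as the weighted generalization.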
Among the 47 (31.3%) instances where the PED physician and the billing auditor disagreed, the PED physician assigned a lower billing code than the billing auditor 27 times (18%). shows the inter-rater reliability indices for the overall sample and stratified by visit characteristics. The inter-rater reliability was highest in the PaedCTAS 3 (AC2: 0.84, 95% CI: 0.6–0.9), age <1y (AC2: 0.81, 95% CI: 0.7–0.95), and clinical trainee involvement (AC2: 0.79, 95% CI: 0.7–0.9) subgroups. Other subgroups display wide and overlapping CIs and no pattern of changes in the inter-rater reliability index. shows the adjusted and unadjusted regressions exploring the association between individual visit characteristics and inter-rater reliability. After controlling for all other subgroups in the adjusted model, clinical trainee involvement was the only subgroup showing a significant association with increased billing code assignment reliability (adjusted OR: 2.2, 95% CI: 1.02–4.9) when compared to visits managed only by the staff PED physician.

Interpretation

Our study found substantial inter-rater reliability in billing code assignment between PED physicians and the billing auditor, which suggests billing codes are accurately assigned. This is an important step in establishing the potential for billing codes to serve as a proxy measure of PED workload. While several subgroups showed association with higher inter-rater reliability, only clinical trainee involvement was associated with significantly higher inter-rater reliability, and this significance persisted when controlling for PaedCTAS, patient age, time of day, and disposition. Several studies evaluating billing practices showed that the amount of experience and time allocated to teaching physicians about billing is associated with increased billing accuracy, such that staff and senior residents tend to have greater levels of comfort and knowledge in assigning billing codes compared to junior residents .
These findings are rather intuitive, as more exposure to and education on a certain topic understandably increases one’s competency in practice. Therefore, given that billing codes in the BCCH ED are only assigned by staff physicians, our finding of high inter-rater reliability with the billing auditor is expected. A systematic review analyzing current billing practices to recommend methods of improving pediatric billing accuracy supports this notion, stating that more billing education is a key component of improved accuracy . Other studies evaluating billing practices that found lower billing accuracy included residents or recent residency graduates in order to assess their quality of education, readiness, and the financial impact of inaccurate billing, rather than assessing billing reliability by experienced staff . Our results also show significantly increased odds of higher inter-rater agreement when clinical trainees are involved, which may be explained by a few factors. First, PED clinical trainees’ documentation has been reported to be more complete than that of staff physicians, which may have given the billing auditor better context and more accuracy in assigning their billing code, increasing the probability of agreement. Second, clinical trainee education and participation is at times intentionally set aside to prioritize the efficiency of patient flow when ED capacity is stressed . In these cases, it may be that trainees are more likely to be involved in simpler cases which require less interpretation to code. This appears to be reflected in our sample, as high acuity cases, which are more likely to be complex, involved fewer trainees than low acuity cases. We acknowledge that the immense complexity of estimating PED workload cannot be entirely addressed using a 3-level billing system.
However, until a more comprehensive PED workload measure is developed and validated, billing codes may be the most appropriate and accessible variable for physician workload estimation, for the following reasons. First, billing codes are by design meant to estimate the complexity of clinical decision making and treatment, which is demonstrated in their utility as the variable used to allocate millions of dollars to compensate fee-for-service physicians. Second, the 3-level billing code system is widely used in Canada, as it is implemented in BC, Ontario, Prince Edward Island, and the Northwest Territories . Furthermore, compared to the 2-level system used in other provinces such as Quebec, Newfoundland and Labrador, Saskatchewan, and Manitoba , the 3-level system may offer better stratification in estimating PED workload. Third, a 3-level billing system can be simple to learn and assign in comparison to other existing billing systems, which may be contributing to its high reliability in use by PED physicians. More complex billing systems exist in other specialties, based on diverse sets of diagnostic or procedural work, such as the International Classification of Diseases or Current Procedural Terminology in the United States, or provincial payment schedules in Canada . These contain thousands of billing codes plus modifiers, which are constantly changing and can often be challenging for physicians to use .

Limitations

Our study results should be interpreted within their limitations. First, the billing auditor’s billing code assignment depends on the quality of the physician’s documentation. Within our sample, the billing auditor flagged seven of the 150 patient records as having poor documentation. In two of the seven flagged records, the alternative billing code they would have assigned, had the documentation contained the required details, matched the physician’s billing code assignment.
Therefore, it is likely that improving physician documentation would increase the inter-rater reliability, and our reported level of agreement, based on retrospective documentation, may be conservative. Secondly, further research with additional sample sizes across visit characteristics is needed to explore the potential association between them and PED physician billing code reliability. Our study does not assess whether the association found between trainee involvement and improved billing code accuracy is intrinsic to the trainee’s charting or if the association can be explained by other variables. Finally, we used shadow billing codes from PED physicians, whose compensation is not dependent on billing pattern. Without a direct financial incentive, concerns may arise about the accuracy of shadow billing data . However, a study in Alberta showed that shadow billing does not affect the accuracy at which codes are submitted by specialists, including pediatricians in urban, acute care hospitals, compared to fee-for-service billing . This suggests that the use of shadow billing data in our study is unlikely to affect the validity of our results.
In this study, we showed that 3-level billing codes are accurately assigned by PED physicians. This provides a positive first step in the validation of billing codes as a proxy measure of PED workload. With a validated proxy measure, opportunities exist for better optimization of PED resource allocation via workload prediction, which can ultimately improve throughput.
Interprofessional contact with conventional healthcare providers in oncology: a survey among complementary medicine practitioners | bd01aab7-3a5b-483b-87e3-2eec0d36b213 | 11282773 | Internal Medicine[mh] | Approximately half of all patients with cancer use complementary medicine (CM) . CM is a healthcare approach that is being used alongside conventional cancer treatment and includes many therapies, such as massage, acupuncture and nutritional supplements . CM can benefit the quality of life of patients with cancer, for instance acupuncture can be used for cancer pain management and mindfulness-based interventions for depression and anxiety during cancer treatment . However, CM can also pose a risk to patients with cancer, for example when herbs and supplements interact with chemotherapy . Given the potential benefits and risks for patients with cancer that use CM, communication between individuals providing CM (CM practitioners) and conventional healthcare providers (HCPs) is important for monitoring the health and safety of patients with cancer. However, there seem to be several barriers to such interprofessional contact. Generally, CM practitioners are located outside the hospital and often work independently of conventional HCPs such as oncologists and nurses. Other barriers described in two previous studies were unfamiliarity with each other’s medical system, language barriers due to distinct terminology , medical dominance of conventional HCPs and the lack of role clarity . There are no guidelines available on interprofessional communication about CM between CM practitioners and conventional HCPs. A previous study showed that physicians and CM practitioners regarded communication with each other as important, although only 7% of physicians and 18% of CM practitioners reported previously having such interprofessional contact . 
Importantly, only one previous study was conducted in an oncology setting and assessed actions to improve communication between CM practitioners and conventional HCPs in oncology, such as being trained in the other field, using common medical terminology and being located in the same practice . To the best of our knowledge, no further studies have been conducted on contact between CM practitioners and conventional HCPs about mutual patients with cancer. Additionally, previous research shows that many patients with cancer do not disclose their CM use to their conventional HCP for reasons such as lack of inquiry or anticipated disapproval . The potential role of CM practitioners in motivating disclosure of CM use by patients to their conventional HCPs remains unclear. This study therefore aims to assess CM practitioners’ experiences with interprofessional contact with conventional HCPs about mutual patients with cancer and the importance they attach to patient disclosure of CM use to their conventional HCP. Potential predictors for interprofessional contact will be explored. An online survey was administered among complementary medicine (CM) practitioners in the Netherlands. This study is part of a larger mixed-method research project titled ‘COMMON’ . Participants and sampling CM practitioners were eligible for participation if they (1) currently treated patients with cancer or cancer survivors and (2) were members of a professional association for CM practitioners. Membership in a professional association is an important quality criterion for CM practitioners in the Netherlands . To recruit participants, a combination of convenience and purposive sampling was used. Eight professional associations of CM practitioners were directly approached with the request to distribute a link to the online version of the survey among their members. 
One association did not respond to the request, seven associations agreed with distributing the survey link (see Additional file , Table ). The largest participating association ( n = 8858) was the Register for Complementary Medicine (RBCZ), an umbrella quality register for complementary medicine practitioners in the Netherlands. In addition, RBCZ requested 24 attached professional associations to distribute the link among their members (e.g. Dutch associations for naturopathy, psychology, homeopathy, shiatsu and reflexology). In response to the distributed survey link, two professional organizations approached us with the request to distribute the survey link among their members (i.e. snowball sampling). The average response rate among the seven actively approached professional associations was 9%. The number of members at time of survey administration of members attached to other associations is unknown, so a response rate could not be calculated. Materials and measures The survey was designed by the research team. First, the researchers (SvD, JJ, MB) defined important themes in a brainstorm session and subsequently created a first draft of the survey. This draft was piloted in a group of coresearchers, consisting of nine (former) patients with cancer. The improvements based on this pilot consisted of the addition of answer options for three survey questions and minor adjustments in sentencing to improve comprehensibility of the questions or answer options. The final survey consisted of 17 items, including both open-ended and closed questions (see Additional file for full survey). The first 10 items consisted of background characteristics of CM practitioners, such as demographics and the type of CM they provide to patients with cancer. To assess CM practitioner experiences with interprofessional contact, four items were included (e.g. contact frequency with conventional HCPs, experienced openness of conventional HCPs to communication). 
Two items assessed the importance attached to patient disclosure of CM use. Last, a question about referral of patients with cancer to the CM practitioner was included. A link was created to direct participants to an online version of the survey. For statistical analysis, SPSS version 27 was used.

Data collection and analysis

When opening the survey link, participants were first provided with information about the study, for instance about data use and expected time for survey completion (10–15 min). Participants were then asked to sign an online informed consent form, and background characteristics were collected. If participants indicated that they did not treat patients with cancer or cancer survivors, they were thanked for their participation and excluded from the rest of the survey. The link to the online survey remained open for 2 months (Aug–Sep 2022). In the first week of September 2022, the participating professional organizations sent a reminder to their members about the survey. After finishing data collection, one researcher (MM) recoded the answers to open questions into relevant categories using qualitative analysis. Because of the large number of categories for type of cancer of visiting patients, type of CM modality provided and type of symptom treated, only the five most common categories were reported in the section. Question 11 (“When you provide therapy to patients who have/had cancer, in general how often do you have contact with doctors or nurses who treat the patient?”) was recoded into three categories. The first category (‘no’) consisted of participants who indicated that they never have contact with conventional HCPs about their mutual patients with cancer. The second category (‘yes’) comprised participants who indicated having contact with conventional HCPs during patient treatment, independent of the contact frequency. Answers that did not fit into these two categories (e.g.
contact only through patients) were categorized as ‘other’. It was decided to exclude question 17 (“How do patients who have/had cancer get to visit you?”) from analysis because its answer categories were not mutually exclusive and the word ‘referral’ was not clearly defined in the answer options. Descriptive statistics were used to present the data on background characteristics, experiences of CM practitioners with interprofessional contact and the importance they attach to patient disclosure of CM use. To explore factors that predict contact between CM practitioners and conventional HCPs, a logistic regression analysis (two-sided, p < .05) was performed in consultation with a statistician. The dependent variable ‘interprofessional contact’ (Q11) was recoded into a binary variable (yes/no) by excluding the ‘other’ category. Of the available variables, six seemed relevant and appropriate as predictors. The predictor ‘sex’ (Q2) was also recoded into a binary variable (male/female) by removing the category ‘other’, which consisted of only four participants. For each predictor, the largest category was used as a reference.
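The recoding of categorical predictors with the largest category as the reference level, as described above, can be sketched as follows (illustrative Python; the study's analyses were run in SPSS):

```python
from collections import Counter

def dummy_code(values):
    """One-hot code a categorical predictor, using the most frequent
    category as the reference level (coded as all zeros)."""
    counts = Counter(values)
    reference = counts.most_common(1)[0][0]          # largest category
    levels = [lvl for lvl in counts if lvl != reference]
    rows = [{lvl: int(v == lvl) for lvl in levels} for v in values]
    return reference, rows
```

Each remaining level then enters the logistic regression as its own indicator, and every reported OR compares that level against the reference category.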
In total, 1961 participants gave informed consent for participation, of which 17 participants were excluded because they were not members of a professional association (see Figs. ) and 458 participants because they indicated that they did not treat patients with cancer or cancer survivors. Eventually, 1486 participants were included. Most participating CM practitioners were female (82%), with a mean age of 56.9 years (SD = 8.1) (see Table ). Years of experience treating patients with cancer ranged from 0 to 45 years, with a mean of 11.4 years (SD = 8.5). On average, CM practitioners reported being visited by 3 to 4 patients with cancer per month.

Experiences with interprofessional contact

Half of the surveyed CM practitioners indicated that they do not have contact with conventional HCPs (see Table ). 40% of the participants had occasional or frequent contact with conventional HCPs of patients with cancer.
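Survey proportions such as these are typically reported with 95% confidence intervals; a minimal Wilson score interval sketch (illustrative only; the exact interval method used in the study is not stated):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half
```

Unlike the simpler Wald interval, the Wilson interval stays within [0, 1] and behaves well for proportions near 0% or 100%, which is why it is often preferred for survey percentages.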
CM practitioners who gave other answers indicated, for instance, that contact with the conventional HCP only takes place through the patient. More than one-third of the CM practitioners (35%) did not experience conventional HCPs as open to interprofessional communication. When CM practitioners communicated with conventional HCPs, this was most frequently by phone (36%). CM practitioners reached out to conventional HCPs to report the treatment plan (27%) or treatment progress (32%). This was sometimes preceded by a referral from a conventional HCP, as appeared from the answers to this open-ended question. In many cases, respondents mentioned that they do not receive a response from the conventional HCP to their report. In other cases (21%), contact between CM practitioners and conventional HCPs consisted of joint coordination, for instance by discussing contraindications for CM use.

Importance of patient disclosure of CM use

The majority (82%) of the CM practitioners indicated that they consider it important that patients disclose their CM use to their conventional HCP, and approximately half of the CM practitioners always motivate their patients to do so. CM practitioners who gave other answers frequently mentioned that patients were anxious to disclose CM use to their conventional HCP.

Predictors of interprofessional contact

The explorative, multivariate logistic regression model shows three significant predictors of interprofessional contact with conventional HCPs as reported by CM practitioners (see Table ). CM practitioners with more years of experience in treating patients with cancer were significantly more likely to have contact with conventional HCPs (OR = 1.05, 95% CI 1.04–1.06, p < .001), although the effect was small.
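Odds ratios with 95% confidence intervals like those reported here are obtained from a fitted logistic-regression coefficient b and its standard error as exp(b) and exp(b ± 1.96·SE). A minimal stdlib sketch of that arithmetic follows; the coefficient and standard error are hypothetical values chosen for illustration, not the study's SPSS output.

```python
import math

def odds_ratio_ci(b: float, se: float, z: float = 1.96):
    """Turn a logistic-regression coefficient (log-odds) and its
    standard error into an odds ratio with a 95% confidence interval."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# Hypothetical coefficient for a continuous predictor such as
# years of experience (illustrative numbers only):
or_, lo, hi = odds_ratio_ci(0.049, 0.005)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 1.05, 95% CI 1.04-1.06
```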
Compared to CM practitioners who experience conventional HCPs as not being open to communication with them, CM practitioners who experience conventional HCPs as open to communication are significantly more likely to have interprofessional contact (OR = 8.12, 95% CI 5.12–12.86, p < .001). This also applies to CM practitioners who gave other answers (e.g. experienced openness of HCPs is situation-dependent), who are more likely to have contact with conventional HCPs compared to CM practitioners who experience conventional HCPs as not open (OR = 2.54, 95% CI 1.82–3.54, p < .001). CM practitioners who have no opinion on the experienced openness of HCPs are significantly less likely to have interprofessional contact with conventional HCPs compared to CM practitioners who experience HCPs as not open to communication (OR = 0.66, 95% CI 0.47–0.92, p < .05). CM practitioners who consider patient disclosure of CM use to their conventional HCP quite or little important are less likely to have contact with conventional HCPs of the patient compared to CM practitioners who consider patient disclosure of CM use very important (OR = 0.70, 95% CI 0.51–0.96, p < .01; OR = 0.39, 95% CI 0.23–0.68, p < .001).

This study examined the experiences of CM practitioners with contact with conventional HCPs in oncology and the importance CM practitioners attach to patient disclosure of CM use to their conventional HCP. Potential predictors for interprofessional contact were explored. In total, 40% of the surveyed CM practitioners (n = 1486) indicated that they occasionally or frequently have contact with conventional HCPs of patients with cancer. The emergence of interprofessional contact seems to be mainly predicted by the extent to which CM practitioners experience conventional HCPs to be open to interprofessional communication. Most CM practitioners (82%) consider it important that patients with cancer disclose CM use to their conventional HCP and motivate their patients to disclose CM use. In a previous survey, 18% of CM practitioners reported having previously communicated with conventional HCPs. The surveyed CM practitioners in the current study reported a much higher prevalence of previous contact with conventional HCPs, which might be explained by the frequent use (51%) of CM by patients with cancer. The study results indicate that the CM practitioner is mostly the initiator of contact, reporting the treatment plan or treatment progress. The study of Schiff et al.
showed that most physicians and CM practitioners feel that the CM practitioner should initiate interprofessional communication. Only a minority of the surveyed CM practitioners experienced conventional HCPs as open to communication with them. This perceived lack of openness is in line with the reported skepticism towards, and lack of knowledge on, complementary medicine among conventional HCPs in oncology. However, since conventional HCPs were not surveyed in the current study, our findings do not reflect the actual openness of conventional HCPs to communication with CM practitioners. Previous studies showed that conventional HCPs find interprofessional communication less important and are less supportive of opportunities to improve interprofessional communication when compared to CM practitioners. Nurses were more supportive than medical doctors, implying that nurses could play a pivotal role in bridging the communication gap between conventional HCPs and CM practitioners. A notable finding is that almost one-third of the surveyed CM practitioners reported having no opinion on the openness of conventional HCPs to communication. Additionally, these CM practitioners were shown to be significantly less likely to have contact with conventional HCPs compared to CM practitioners who experienced conventional HCPs as not open to communication. This could imply that these CM practitioners did not consider interprofessional contact relevant; the relevance of interprofessional contact between CM practitioners and conventional HCPs is situation-dependent, e.g. in the case of cancer survivors who have completed treatment. Another possibility is that CM practitioners who indicated having no opinion on the openness of conventional HCPs have treated few patients with cancer so far, making them unable to properly evaluate this topic.
Indeed, the results showed that the CM practitioner's years of experience in treating patients was significantly associated with contact with conventional HCPs. The role of CM practitioners in patient disclosure of CM use to their HCP is an understudied topic in the existing literature. The present study shows that a large majority of CM practitioners attach importance to patient disclosure of CM use and motivate their patients to discuss CM use with their conventional HCPs. The importance a CM practitioner attaches to patient disclosure of CM use to their conventional HCP can reflect how relevant they consider it that the conventional HCP is informed. Indeed, the results of this study showed that the perceived importance of patient disclosure of CM use predicts whether a CM practitioner has contact with conventional HCPs. CM practitioners highlighting the importance of discussing CM use, and encouraging patients to do so, could facilitate patient disclosure of CM use, which is reportedly hindered by a lack of inquiry by the healthcare provider, anticipation of disapproval by the healthcare provider, or the perception that disclosing CM use is not relevant. In the current study, experience with patients being anxious to disclose CM use to their conventional HCP was also reported in open-ended questions by the surveyed CM practitioners. The specific situations in which contact between CM practitioners and conventional HCPs is relevant should be explored in a follow-up study. Nonetheless, it is important for HCPs to be aware of patient CM use, since it can provide valuable medical information about the patient and their (unsolved) complaints. In addition, complementary medicine use may indicate dissatisfaction with conventional care [1]. Patients are often given the responsibility of informing the conventional HCP about their CM use.
It is questionable whether patients should bear this responsibility, especially when it concerns the safety of combining CM with conventional anticancer treatment. For optimal monitoring of the health and safety of patients with cancer, there should be open communication about CM use between all parties involved: conventional HCPs, CM practitioners and the patient. This will prevent the disappearance of valuable medical information in the metaphorical “Bermuda Triangle” between the three parties.

Strengths and limitations

This study is, to the best of our knowledge, the first to describe CM practitioners’ experiences with contact with conventional HCPs in oncology. To overcome sampling bias and include different types of CM practitioners, we approached an umbrella quality register. Although the average response rate among members of actively approached professional organizations was low (9%), the total sample size is large enough to outline the experiences of CM practitioners with interprofessional contact. The 9% response rate might nevertheless have introduced bias; for instance, complementary medicine practitioners who are more willing to communicate with conventional healthcare providers may have been more likely to respond, resulting in an overestimation of interprofessional contact. Furthermore, some types of CM practitioners, such as acupuncturists, are overrepresented in the sample because their professional associations were directly approached for survey distribution. In addition, most participants were female with a high education level. Whether this is representative of the population of CM practitioners in the Netherlands is not clear because sufficient oversight is lacking. In a comparable survey conducted in an oncology setting in Norway, the CM practitioners visited by patients with cancer were also predominantly female. The sex of a CM practitioner was not a significant predictor of contact with conventional HCPs. Some limitations are associated with the survey.
The fact that proportionately many participants chose the ‘other’ category for multiple-choice questions could indicate that the existing answer options were not sufficient. Respondents who answered in the ‘other’ categories often mentioned that they could not provide an unequivocal answer to the question posed because it was situation-dependent; for example, experienced openness varies by HCP, or the relevance of interprofessional contact varies by patient. In addition, it was possible to proceed to the next question without answering the previous question, resulting in missing values.

Future studies

The current study only highlighted the perspective of CM practitioners on interprofessional contact. Future research should focus on the needs and desired roles of conventional HCPs and patients in the process of interprofessional contact. It is unclear how patients feel about their intermediary role between CM practitioners and HCPs. Given that interprofessional communication is often a non-routinized, unstructured process, the appropriate method, frequency and content of communication should be further explored. For instance, it could be explored amongst conventional healthcare providers what type of information about the complementary medicine use of their patients is of relevance, such as the indication, content or outcomes of treatment by the complementary medicine practitioner. In addition, the factors that determine the openness of HCPs as experienced by CM practitioners could be investigated in more depth, for example by means of interviews.
To conclude, interprofessional contact with conventional HCPs occurs but is not a standard routine for most CM practitioners. More than one-third of the surveyed CM practitioners experienced conventional HCPs as not open to communication with them. The openness of conventional HCPs as experienced by CM practitioners appeared to significantly determine whether interprofessional contact occurs. Most CM practitioners considered patient disclosure of CM use to their conventional HCP to be important. Open communication about CM use between CM practitioners, conventional HCPs and patients prevents overlooking relevant medical information and facilitates optimal monitoring of the health condition and safety of patients with cancer. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
Evaluation of Microvascular Density in Glioblastomas in Relation to p53 and Ki67 Immunoexpression

Diffuse gliomas, originating from glial stem or progenitor cells, represent the most frequent tumors of the central nervous system in adults. Glioblastomas, classified as astrocytic gliomas, grade 4, according to the WHO CNS classification, represent the most aggressive and lethal form of primary intracranial malignant tumor in adults. The incidence rate of glioblastomas is 3.19–4.17/100,000 person-years, accounting for 50.1% of all malignant tumors of the central nervous system in the USA. The median survival ranges between 12 and 15 months after diagnosis, with the median age of patients being approximately 64 years and a higher incidence in men (men/women = 1.5:1). The diagnosis of glioblastomas, according to the WHO CNS classification, is based on histological and molecular criteria, placing them in the category of diffuse astrocytomas with the highest degree of malignancy (grade 4). Tumor grading rests on histopathological characteristics such as increased cell density, cyto-nuclear atypia, increased mitotic activity, microvascular proliferation, and necrosis; the presence of at least one of the latter two criteria is mandatory to define grade 4. Depending on the presence of the isocitrate dehydrogenase 1 (IDH1) mutation, according to WHO 2016, glioblastomas can be subclassified into primary glioblastoma or the wild type (IDH1 wild type), representing approximately 90% of cases and occurring predominantly in elderly patients, and secondary glioblastoma or IDH1 mutant (less than 10% of cases), which develops on the background of a low-grade glioma, preferentially occurs in young individuals, and is associated with a more favorable survival rate. Glioblastomas have rich but inefficient vascularization, characterized by hypoxia.
Hypoxia and inadequate nutrient supply favor the appearance of angiogenic factors, such as vascular endothelial growth factor (VEGF) or platelet-derived growth factor (PDGF), leading to the formation of new vascular networks. Tumoral angiogenesis represents the development of new blood vessels and has been recognized as a distinctive sign of malignant tumors. The degree of angiogenesis, studied as microvascular density (MVD), impacts the progression and invasive nature of the tumor. It is now recognized that tumors present alternative mechanisms of vascularization, such as vascular mimicry and the transdifferentiation pathway of tumor cells into endothelial cells. Vascular mimicry represents a model of functional microcirculation generated by tumor cells, lacking endothelial lining, demonstrated by immunohistochemical studies with various endothelial markers, such as CD34 or CD105. Anti-angiogenic treatments (bevacizumab) and surgical resection followed by temozolomide and radiotherapy have not achieved the expected effect and have not contributed to improving patient survival in the case of glioblastomas. CD34 is a marker of endothelial progenitor cells that plays a crucial role in regulating angiogenesis in glioblastomas. CD34 stimulates the development of a new network of blood vessels and promotes tumor proliferation and invasion, thus playing a role in worsening the prognosis. Increased CD34 expression in diffuse gliomas has been associated with tumor grade, with CD34 expression in glioblastomas being higher than in low-grade gliomas (LGGs). However, no correlations have been described between CD34 and patient age or sex. CD105, or endoglin, is a transmembrane protein located on the membrane of endothelial cells involved in the angiogenesis of immature vessels.
Endothelial cells with active proliferative capacity show increased CD105 expression, which plays an essential role in controlling angiogenesis in glioblastomas by stimulating the development of new vascular networks. According to the literature, the role of the increased expression of biomarkers involved in glioblastoma angiogenesis, such as CD34 or CD105, is currently not fully elucidated. Regarding the relationship of microvascular density with the Ki67 proliferation index and the p53 mutation, the data are controversial. Recent studies have shown that up to 60% of tumor endothelial cells express p53 protein concomitantly with glial tumor cells in glioblastoma. These results, together with other somatic mutations in the primary tumor, support the idea of transdifferentiation of endothelial cells from adjacent tumor glial cells. The aim of this study was to examine angiogenesis through tumor microcirculation. The microcirculation of glioblastomas was evaluated by histological quantification of the expression of the pan-endothelial marker CD34, as well as of the newly formed microvessel marker CD105 (endoglin). The immunohistochemical results obtained were correlated with the Ki-67 proliferation index and p53 protein immunoexpression, as well as with IDH1 and alpha-thalassemia/intellectual disability, X-linked (ATRX), mutational status. We included 54 cases of glioblastoma in our retrospective study. Regarding gender distribution, we found a slight predominance of males (29/54), with a sex ratio of 1.16 favoring males. The majority of patients were over 50 years old (35/54, 64.8%). Regarding laterality, both cerebral hemispheres were equally affected. Most cases were IDH1 wild-type glioblastomas (47/54, 87%), ATRX wild type (39/54, 72%), and p53 wild type (43/54, 79%), with a Ki67 index value over 20% present in 35.19% (19/54) of cases.
MVD-CD34 and MVD-CD105

Regarding tumor vascularization in glioblastomas, we observed that the number of neoformed blood vessels varied from case to case; the vessels were characterized by different shapes and calibers, and the endothelial cells showed modified morphology, associated with moderate to marked pleomorphism. Characteristically for glioblastomas, we also observed neovessels with a glomeruloid aspect associated with intense positivity for CD34 or CD105 in endothelial cells ( and ). In some cases, the immunoexpression of CD34 or CD105 in tumor endothelial cells was absent, and these markers did not highlight the analyzed vascular structures; such vascular structures likely support the existence of the vascular mimicry mechanism. Instead, some glial cells, likely tumor progenitor cells or cells undergoing endothelial transdifferentiation, show positive immunostaining for CD34. These tumor cells are located around neovessels, suggesting an active proliferation of vascular structures in the adjacent tumor stroma. The values of microvascular density quantified by the percentage of endothelial cells marked with CD34 ranged from 0.35% to 16.89%, representing all vessels in the tumor stroma, with an average value of 4.13%. The values of microvascular density determined by the CD105 antigen were within similar ranges as CD34, ranging from 0.33% to 19.32%, but with an average value of 3.76%. In our observations, the median vascular density in normal brain tissue was 0.09% as determined by CD34 immunohistochemical staining, and 0.04% as evaluated by CD105 ( and ). We could not demonstrate statistically significant differences between MVD-CD34 and MVD-CD105 values (p = 0.58), but compared to normal brain, microvascular density was significantly higher in glioblastomas (p < 0.0001).
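Expressing MVD as the marker-positive percentage of the examined tumor area, as above, reduces to simple arithmetic once the immunostained endothelium has been segmented. The sketch below is a hypothetical illustration only; the pixel counts are invented and this is not the authors' morphometry workflow.

```python
def mvd_percent(positive_pixels: int, total_pixels: int) -> float:
    """MVD expressed as the percentage of the examined tumor area
    occupied by CD34- or CD105-positive endothelium."""
    if total_pixels <= 0:
        raise ValueError("examined area must be positive")
    return 100.0 * positive_pixels / total_pixels

# e.g. 4,130 positive pixels in a 100,000-pixel field:
print(mvd_percent(4_130, 100_000))  # → 4.13
```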
CD105-positive vascular areas exceeding 5% of the total examined tumor were recorded in only 22.22% (12/54) of cases, and for the CD34 marker in only 31.48% (17/54) of cases. Regarding MVD-CD34 in the right cerebral hemisphere, most cases had values below 2% (12/27), while in tumors located in the left hemisphere, values above 5% predominated (11/27). MVD-CD34 values above 5% were more frequent in the temporal lobe (36.33%, 8/22) and frontal lobe (50%, 8/16), while those below 2% were more common in the parietal lobe (6/9) and temporal lobe (7/22). However, the highest MVD-CD34 values were recorded in the temporal lobe and the right hemisphere. In the parietal lobe (9/9) and occipital lobe (6/7), MVD-CD34 values were predominantly below 5%. It can be observed that involvement of the temporal and frontal lobes is associated with higher microvascular density compared to the parietal and occipital lobes, but we could not demonstrate a statistically significant association between MVD-CD34 and tumor location or laterality (p = 0.17, p = 0.34). In both sexes, MVD-CD34 values were below 5% in most cases. The male-to-female ratio in cases with MVD-CD34 values below 5% was 1.17 (20/17), while in those with MVD-CD34 values above 5%, it was 1.12 (9/8). Most patients over 65 years old (55.5%, 10/18) recorded MVD-CD34 values below 2%, while most lesions with values above 5% developed in patients under 50 years old (31.8%, 7/19). We also could not demonstrate a statistically significant association between MVD-CD34 and the age and sex of the patients (p = 0.24 and p = 0.97, respectively). In the case of endoglin (CD105), we observed that MVD values ranging from 2% to 5% were more frequent in the left hemisphere (12/27), while in the right hemisphere, cases with MVD-CD105 values below 2% predominated (12/27).
In the left hemisphere, cases with MVD-CD105 values above 5% were twice as frequent as in the contralateral hemisphere, but the highest value of MVD-CD105 was determined in a glioblastoma located in the right hemisphere. Regarding location, MVD-CD105 values above 5% were more frequent in the temporal lobe (7/22), while MVD-CD105 values below 2% were more frequent in the parietal lobe (6/9). MVD-CD105 values ranging from 2% to 5% were more frequent in the temporal lobe (11/22). The highest value of MVD-CD105 was recorded in the frontal lobe and the right hemisphere. No statistically significant association was observed between MVD-CD105 and tumor location and laterality (p = 0.10, p = 0.26). MVD-CD105 values ranging from 2% to 5% were more frequent in patients over 50 years old (17/35), compared to patients under 50 years old, who more frequently presented MVD-CD105 values below 2% (7/19). MVD-CD105 values below 5% were more frequently recorded in males, with a male-to-female ratio of 1.21 (23/19). No statistically significant association was observed between MVD-CD105 and the sex and age of the patients (p = 0.31, p = 0.54). Cases with IDH1 mutations presented higher median microvascular density both through the CD34 marker, 3.75% (1.89–6.57), and through CD105, 3.92% (0.8–5.9), compared to cases where the mutation was not present (2.76% (1.48–6.05) and 2.89% (1.38–4.63), respectively). Thus, IDH1 mutant glioblastomas exhibit a more abundant and prolific microvascular density compared to wild-type ones, but the difference between these values is not statistically significant (p = 0.50, p = 0.74). It is worth mentioning that microvascular density, in both primary and secondary glioblastomas, highlighted by the CD34 and CD105 markers, recorded nearly equal mean values, with the ratio of MVD-CD105/MVD-CD34 in IDH1 wild-type and mutant tumors being 1.04.
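The MVD-CD105/MVD-CD34 ratio quoted above is simply the quotient of the two group medians. An illustrative stdlib sketch with hypothetical per-case values follows (these are invented numbers, not the study's raw data):

```python
import statistics

# Hypothetical per-case MVD values (%) for one tumor group:
mvd_cd34  = [1.1, 1.5, 2.8, 3.8, 6.1]
mvd_cd105 = [0.8, 1.4, 2.9, 3.9, 4.8]

# Ratio of the median MVD-CD105 to the median MVD-CD34:
ratio = statistics.median(mvd_cd105) / statistics.median(mvd_cd34)
print(round(ratio, 2))  # → 1.04
```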
Most cases of the IDH1 wild type presented MVD-CD34 values below 2% (19/37), while most IDH1 mutant glioblastomas had MVD-CD34 values above 5% (3/7). Regarding CD105, cases with a microvascular density ranging from 2% to 5% were the most frequent among IDH1 wild-type glioblastomas (22/47). The cases with the highest vascular microdensities through CD105 were IDH1 wild-type glioblastomas, whereas the highest MVD-CD34 value was observed in an IDH1 mutant glioblastoma. Additionally, the association between the expression of IDH1 and the CD34 or CD105 markers did not prove to be statistically significant (p = 0.7, p = 0.2). In ATRX mutant cases, we observed higher median values of microvascular density both through the CD34 marker, 3.75% (2.11–7.69), and through CD105, 3.92% (1.03–6.04). In cases where the ATRX mutation was present, MVD-CD105 was slightly higher compared to MVD-CD34. We could not demonstrate statistically significant differences between the mean values of MVD-CD34 and MVD-CD105 in relation to the ATRX mutation (p = 0.10 and p = 0.39, respectively). The proportion of cases with MVD-CD34 below 2% was higher in ATRX wild-type glioblastomas (17/39), while the proportion of those with MVD-CD34 above 5% was higher in ATRX mutant glioblastomas (6/15). Most ATRX wild-type glioblastomas exhibited MVD-CD105 between 2 and 5% (19/39). There was no statistically significant association between ATRX and microvascular density analyzed through the CD34 and CD105 markers (p = 0.5 and p = 0.12, respectively). Regarding microvascular density in relation to the p53 mutation, the median values of microvascular density were higher in wild-type glioblastomas, with higher values for MVD-CD105. The ratio between the median values of CD105-positive microvascular density among wild-type p53 tumors (2.9% (1.36–4.84)) and mutant p53 tumors (2.12% (1.48–4.87)) was 1.36, and for MVD-CD34, it was 1.09.
Mutant p53 cases showed a more pronounced microvascular density through CD34 expression (2.6% (1.53–6.86)) compared to CD105 expression (2.12% (1.48–4.87)), while in wild-type p53 glioblastomas, the determined microvascular density was approximately equal through CD105 (2.9% (1.36–4.84)) and CD34 (2.84% (1.36–6.05)). There was no statistically significant difference between the mean values of MVD through CD34 or CD105 in either mutant- or wild-type p53 cases (p = 0.57 and 0.96, respectively). MVD-CD34 values below 2% were more frequent in wild-type p53 glioblastomas (17/43). MVD-CD105 values of 2–5% were more frequent in wild-type p53 tumors (20/43). In contrast, mutant p53 cases with MVD-CD105 values below 2% (5/11) were more frequent compared to MVD-CD34 (4/11). It can be observed that, through both MVD-CD34 and MVD-CD105, the highest values were recorded in patients with wild-type p53 glioblastomas. Regarding the Ki67 index, vascular proliferation through CD34 and CD105 immunoreactivity recorded higher median values in cases where the Ki67 index was above 20% compared to those below 5%. In cases with Ki67 below 5%, microvascular density showed higher proliferation through CD34 immunoreactivity (2.56 (1.16–3.89)) compared to CD105 (2.39 (0.76–5.15)). The highest median values of microvascular density through CD34, 3.62% (1.57–8.63), were recorded in cases where the Ki67 index ranged between 5 and 20%, in contrast to CD105 immunoreactivity, where the median microvascular density was 2.39% (1.95–3.99). Cases where the Ki67 index was over 20% showed a more pronounced microvascular density through CD105 (2.99 (1.73–4.84)) compared to CD34 (2.76 (1.48–6.05)); however, the difference between the mean values of MVD-CD34 and MVD-CD105 within the studied Ki67 proliferation index intervals was not statistically significant (p = 0.39 and p = 0.7, respectively).
Among glioblastomas with MVD-CD34 under 5%, those with a Ki67 proliferative index under 5% were more frequent (15/37). In cases with MVD-CD34 over 5%, a Ki67 proliferative index over 5% was observed in the majority of cases (14/17). A Ki67 proliferative index over 5% was observed in the majority of cases with MVD-CD105 under 2% (10/19), in 82.6% (19/23) of cases with MVD-CD105 between 2 and 5%, and in 58.3% (7/12) of cases with MVD-CD105 over 5%. We could not demonstrate a statistically significant association between the Ki67 index and the microvascular density markers CD34 and CD105 (p = 0.38 and p = 0.26, respectively). In this retrospective immunohistochemical study, we included 54 cases of glioblastoma, which predominantly developed in patients over 65 years old, with a left hemisphere predominance. We correlated MVD-CD34 and MVD-CD105 with clinicopathological parameters and the immunohistochemical results obtained with p53, Ki67, IDH1, and ATRX antibodies. Most cases were IDH1, ATRX, and p53 wild type, with a Ki67 index over 20%. The mean values of microvascular density were higher for the CD34 marker; however, compared with those obtained for CD105, there were no statistically significant differences between them. We found that IDH1 and ATRX mutant glioblastomas, wild-type p53 tumors, and those with a Ki67 index over 20% exhibit a more abundant and prolific microvascular density, although the statistical correlations did not reach significant values. Globally, primary brain tumors represent the 17th most common type of cancer, with approximately 77% of them being of glial origin. Glioblastomas are one of the leading causes of tumor-related mortality worldwide. Molecularly, the mutational status of isocitrate dehydrogenase (IDH1) has been demonstrated to be a prognostic factor in primary glioblastomas but does not represent a predictive factor for immunotherapeutic treatment.
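Categorical associations such as Ki67 status versus MVD-CD105 class were tested with the chi-square test. As a sketch, the statistic can be computed by hand from the counts quoted for MVD-CD105 (10/19, 19/23, 7/12); note that the study's own analysis may have grouped the categories differently, so this illustration is not expected to reproduce the reported p value:

```python
import math

# Rows: MVD-CD105 < 2%, 2-5%, > 5%; columns: Ki67 > 5%, Ki67 <= 5%
# (counts taken from the fractions quoted in the text: 10/19, 19/23, 7/12).
observed = [[10, 9], [19, 4], [7, 5]]

row_tot = [sum(r) for r in observed]
col_tot = [sum(c) for c in zip(*observed)]
n = sum(row_tot)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_tot[i] * col_tot[j] / n
        chi2 += (obs - expected) ** 2 / expected

# For df = (3-1)*(2-1) = 2, the chi-square survival function is exactly exp(-x/2).
p = math.exp(-chi2 / 2)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # not significant at the 0.05 level
```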
According to data from the literature, the number of IDH1 mutant cases varies considerably. Similarly to our case series, in studies conducted by Martinez-Lage et al. and Munthe et al., the presence of the mutation ranged between 4.1 and 11.5% of cases, confirming the data predicted by WHO . In contrast to our results, in the literature, the number of IDH1 mutant glioblastomas varies between 22.91 and 38.5% . In the study conducted by Deacu et al., the patients’ survival was not influenced by the presence or absence of the IDH1 mutation . Additionally, there is still insufficient data regarding the relationship between the IDH1 mutation status of the tumor and tumor angiogenesis . The cellular and molecular mechanisms of angiogenesis in glioblastomas are currently strong points in the research field of new therapeutic targets . The mechanism of angiogenesis or vasculogenesis in glioblastomas has been described in numerous studies, but there are still considerable controversies. The cells responsible for tumor neovascularization can be endothelial cells derived from bone marrow or progenitor stem cells, through mechanisms not yet fully elucidated . Microvascular density can be an unfavorable prognostic factor in malignant gliomas . The heterogeneity of vascular morphology in glioblastomas is represented by a variety of vascular patterns relevant to clinical prognosis. In the study by Chen et al., microvascular density was studied based on four types of vascular patterns. The number of CD34-positive cells in microvascular sprouting (MS) and vascular cluster (VC) patterns was significantly lower than that in vascular garland (VG) and glomeruloid vascular patterns (GVPs), with median values of CD34 immunostaining ranging from 5.91 to 10.29 . In the study conducted by Jha et al., the microvascular density measured by CD34 ranged from 9.2% to 41.9% (HPF) and showed a statistically significant association with Ki-67 expression, unlike our results . 
In the study by Clara et al., the mean value of microvascular density studied by CD34 was 23.9%, while for CD105 it was only 8.9%, with the former being statistically significantly correlated with HIF, unlike our case where microvascular density was on average lower. The CD34/CD105 ratio was 2.68, higher than our results (1.06) . Moghaddam et al. recorded mean values of microvessel density with CD105 in neoplastic areas of 14.28%, compared to non-neoplastic areas, with a significant difference between them ( p = 0.012). The mean expression of the proliferation index (Ki-67) was 21.44%, with both markers, CD105 and CD31, correlating with Ki67 immunostaining, unlike our results . Similar to our study, Mikkelsen et al. demonstrated that CD105-MVD did not significantly correlate with endothelial cell density, with a mean value of 16.5% . McGahan et al., in their immunohistochemical study evaluating microvascular density, described a positive correlation between CD105, CD34 expression, and tumor-associated hemorrhage . Tamma et al., in their study, showed that p53 -negative tumor cells are positively correlated with CD34-positive endothelial cells. These data may confirm that the presence of immune and inflammatory cells in the tumor microenvironment contributes to tumor progression and angiogenesis . Tamma et al. analyzed CD34 immunoexpression in normal cerebral tissue compared to glioblastomas to establish differences in microvascular densities. They observed a significant increase in MVD-CD34 in tumor tissue, with a median value of 2.1%, compared to normal brain tissue, which had a median value of 0.58%. In our study, the median value of MVD-CD34 in glioblastoma tissue was 2.8%, while in normal brain tissue, it was 0.09% . The tumor microenvironment comprises several components. 
In our study, we analyzed the microvascular density through CD34 or CD105 immunostaining in comparison with the presence or absence of the p53 mutation in glial tumor cells and found no statistically significant correlation. In the study conducted by Tamma et al., where p53 positivity was higher in tumor cells compared to normal brain tissue, no correlation was found between the tumor microenvironment and the p53 mutation. In the study conducted by Alkhaibary et al., the Ki-67 index was proposed as a prognostic factor, but previous studies showed contradictory results. Regarding the importance of the Ki-67 proliferation index, it has been shown that a higher Ki-67 index predisposes to longer survival. Alkhaibary et al. found no statistically significant correlation between the Ki-67 index and survival rate. Similar to our study, Bastos et al. demonstrated higher levels of CD105 and Ki-67, which seem to be associated with more aggressive glioblastomas, but they did not record any statistically significant association between the two markers. Burghardt et al., in their study, did not observe a statistically significant association between survival rate and microvascular density analyzed by CD105, regardless of its value, in both de novo and recurrent cases. Similar results were described by Mihic et al. In contrast, Behrem et al. found a statistically significant correlation between the two biomarkers in a sample numerically similar to our study. Bastos et al., in their study of a cohort of patients treated with bevacizumab or temozolomide, did not record longer survival in any of the analyzed groups, regardless of the measured microvascular density (CD105). They also did not observe an association between survival and tumor location, and the Ki-67 index did not impact the prognosis.
Regarding survival, some data have shown a correlation between increased microvessel density, evaluated by CD105 immunostaining (MVD-CD105), and a worse survival/prognosis of patients with glioblastomas characterized by increased MVD-CD105 , while others have not shown any significant association in the studied cohorts . Regarding the vascular invasion of glioblastomas, studies have shown differences between vascular patterns and MVD through CD105 immunostaining. Some authors suggest that cells in the infiltrative zone had a molecular composition showing the presence of immature vascular structures, alongside a minimal number of endothelial cells and low expression of VEGF receptors, compared to the tumor central zone where vessels had mature endothelial cells, implying a higher value for VEGF expression. This observation confers greater aggressiveness in tumors by MVD-CD105 from the central zone compared to the tumor periphery . In contrast, Bastos et al. found no association between MVD-CD105 analyzed in the central zone or in the peripheral zone compared to overall survival . Maddison et al. analyzed microvascular densities in both primary glioblastomas and recurrences and showed a decrease in total microvascular density, including endothelial cells analyzed by CD34 immunostaining, in recurrent glioblastomas. Moreover, they found a statistically significant decrease in terms of MVD-CD34 between de novo cases and recurrent cases . Several clinical studies have investigated the activity of anti-VEGF monoclonal antibodies in glioblastomas, presenting their limitations regarding survival. Resistance mechanisms to antiangiogenic therapy, including vessel co-option or hypoxic signaling, are associated with the tumor microenvironment by modulating glioma stem cells . 
The interactions between tumor glial cells and the tumor microenvironment, especially glioma stem cells, enhance the formation of neovascular structures from the tumor stroma, contributing to an unfavorable prognosis in terms of survival. Angiogenesis in glioblastomas is predominantly attributed to VEGF upregulation, which stimulates proliferation and migration or transdifferentiation of glioma cells into endothelial cells. Hypoxia-induced VEGF expression, due to the pronounced proliferation of tumor glial cells, subsequently stimulates the formation of poorly formed neovessels. However, these newly formed structures are not sufficient for adequate vascularization of tumor glial cells, so they migrate from hypoxic regions by invading healthy peritumoral tissue, while they may undergo transcriptional modifications that further increase resistance to therapy . Some studies have shown that anti-VEGF treatment with bevacizumab did not yield the desired results in the recurrence of glioblastomas, which is inevitable in most cases. Therefore, another approach is needed regarding halting tumor angiogenesis. TRC105, a new chemotherapeutic agent, induces antibody-dependent cellular cytotoxicity and apoptosis of human vascular endothelial cells and tumor cells positive for endoglin and inhibits angiogenesis in response to VEGF . Moreover, TRC105, being an antibody targeting CD105, seems to enhance the effect of bevacizumab in vivo, and is being considered an option for treatment with or without the administration of Bevacizumab . 4.1. Clinical Data In this retrospective study, fifty-four patients diagnosed with glioblastoma at the Pathology Department of the Emergency County Clinical Hospital Târgu Mureș, between February 2014 and December 2017, were included. 
The inclusion criteria were as follows: (1) histopathological confirmation of glioblastoma without prior diagnosis or oncologic treatment for any type of brain tumor; (2) no history of brain biopsy; and (3) availability of tumor tissue in at least two paraffin blocks for the determination of IDH1-R132H, ATRX, CD34, CD105, Ki67, and p53 immunoexpression. Histopathological diagnoses were re-evaluated by a neuropathologist in accordance with the World Health Organization’s classification of nervous system tumors published in 2016. 4.2. Immunohistochemistry Operative samples were fixed in formalin, embedded in paraffin, and subsequently sectioned at a thickness of 3 μm. The obtained sections followed the standard deparaffinization and rehydration procedure. Endogenous peroxidase was blocked by applying a 10 min treatment with 3% H2O2. Antigen retrieval was performed by steam heat treatment for 25 min in a citrate buffer solution (pH 6). We used mouse monoclonal antibody IDH1R132H, IHC132 clone (BioSB, Santa Barbara, CA, USA), diluted at 1:25, hPh, for 60 min; mouse monoclonal antibody ATRX, BSB-108 clone (BioSB, Santa Barbara, CA, USA), diluted at 1:50, hPh, for 60 min; rabbit monoclonal antibody CD34, EP88 clone (BioSB, Santa Barbara, CA, USA), diluted at 1:100, hPh, for 60 min; rabbit monoclonal antibody CD105, EP274 clone (BioSB, Santa Barbara, CA, USA), diluted at 1:200, hPh, for 60 min; mouse monoclonal antibody Ki67, MM1 clone (Novocastra, Leica Biosystems, Deer Park, IL, USA), diluted at 1:150, hPh, for 60 min; and mouse monoclonal antibody p53, DO7 clone (BioSB, Santa Barbara, CA, USA), diluted at 1:800, hPh, for 60 min. The EnVision FLEX horseradish peroxidase (HRP) system (Dako, Santa Clara, CA, USA, 30 min) was used for signal amplification, and 3,3’-diaminobenzidine (DAB) chromogen was used for primary antibody detection. Subsequently, slides were counterstained with hematoxylin. 4.3.
Slide Evaluation A preliminary examination of the slides was performed using an Olympus BX46 microscope, and the slides were then scanned with a 3DHistech PANORAMIC 1000 scanner. Microvascular density was determined based on CD34 and CD105 immunoexpression. In the tumor tissue, four areas with the highest densities of blood vessels were selected, initially with a low-power objective (×40) and then using a higher-power objective (×400). Objective quantification of the percentage of microvascular density in the tumor stroma and in the normal brain tissue located next to the tumor tissue was performed using the Slideview program (3DHistech, Budapest, Hungary), and the mean values obtained for the four analyzed areas were calculated. We considered an immunohistochemical reaction positive if solitary or grouped endothelial cells, whether or not involved in vascular lumen formation, showed a positive reaction. The endogenous immunohistochemical control for CD34 and CD105 was the endothelial cells in normal brain tissue vessels. IDH1 mutation expression was determined by quantifying positively stained cytoplasmic tumor cells, regardless of color intensity. Cases in which ≥10% of cells were stained were defined as positive (IDH1 mutant), while cases where this value did not exceed 10% of tumor cells were considered negative (IDH1 wild type). For the ATRX marker, ATRX gene mutations are followed by a loss of nuclear immunoexpression in over 50% of tumor cells (ATRX loss, ATRX mutant type). If ATRX immunoexpression remains preserved in over 50% of tumor cells, the tumor is considered ATRX wild type, with the endogenous positive control being endothelial cells. The Ki-67 proliferation index was determined as the percentage of positively stained tumor cells (regardless of intensity) relative to 1000 cells. The presence of p53 was determined using the percentage of immunopositive cells relative to 200 cells in 5 fields.
We considered negative immunoexpression if immunostaining was present in <10% of tumor cells (wild type) and positive if >10% of examined tumor cells were immunopositive (mutant type). The interpretation of immunohistochemical results was supervised by a neuropathologist. 4.4. Statistical Analysis Descriptive and inferential statistics were performed. The normality of the distribution of continuous variables was tested using the Shapiro–Wilk test. Continuous variables were expressed as a median (25th percentile, 75th percentile), and medians were compared using the Mann–Whitney test. Categorical variables were displayed as numbers, and between-group comparisons were performed using the chi-square test. A value of p < 0.05 was considered significant. The IBM SPSS Statistics 22.0 program (IBM Corporation, Armonk, NY, USA) software was used for statistical analyses. 4.5. Study Limitation The main limitation of our study is represented by the small sample size. 4.6. Ethics Committee This study was approved by the Ethics Committee of the County Emergency Clinical Hospital Târgu Mureș (24494/16.10.2020).
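The scoring rules described in Sections 4.2 and 4.3 reduce to simple cut-off functions; a minimal sketch (the function names are ours, the thresholds are those stated in the text):

```python
from statistics import mean

def classify_idh1(pct_stained):
    # >=10% stained tumor cells -> IDH1 mutant, otherwise wild type
    return "mutant" if pct_stained >= 10 else "wild type"

def classify_atrx(pct_nuclear_loss):
    # Loss of nuclear expression in >50% of tumor cells -> ATRX mutant (loss)
    return "mutant" if pct_nuclear_loss > 50 else "wild type"

def classify_p53(pct_positive):
    # >10% immunopositive tumor cells -> mutant type
    return "mutant" if pct_positive > 10 else "wild type"

def mvd(field_percentages):
    # MVD = mean of the four hot-spot field percentages
    return mean(field_percentages)
```

For instance, `mvd([2.1, 3.4, 2.8, 3.1])` gives 2.85, and `classify_idh1(12)` returns "mutant".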
This study has highlighted a variety of percentage ranges of vascular microdensity evaluated by CD34 and CD105 immunohistochemical expressions, without their correlation with p53 and Ki-67 in primary or secondary glioblastomas, in the studied geographical area. Since multidisciplinary therapeutic strategies, especially antiangiogenic ones, are under evaluation and standardization for glioblastomas, further targeted molecular studies are needed in the future.
Do fiber tips with different geometric designs affect organic tissue loss in laser-activated irrigation of teeth with immature apex? An in vitro study | e73277a0-cf08-46c5-84fa-4a32e0f7333d | 11828837 | Dentistry[mh] | Trauma, caries, and developmental malformations such as dens invaginatus or dens evaginatus may result in pulp necrosis and arrest of root maturation in young permanent teeth . Thus, teeth with immature apex, necrotic pulp, and thin and fragile dentin walls are difficult to manage . Due to the thin dentin walls in teeth with immature apex, disinfection is provided by irrigation solutions and intracanal medicaments rather than mechanical instrumentation . Therefore, irrigation activation methods are recommended to increase the effectiveness of irrigation solutions in these teeth . However, irrigation activation methods may cause apical extrusion of irrigation solutions in teeth with immature apex . Apical extrusion of irrigation solutions negatively affects healing and regeneration by damaging periapical tissues and stem cells. Therefore, searching for the ideal irrigation activation method that causes minimal apical extrusion and tissue loss while providing maximum disinfection efficacy in teeth with immature apexes continues . Standard needle irrigation (SNI) is commonly used today due to its ease of application and low cost. However, irrigation solutions show low efficacy in SNI due to limited penetration into the dentinal tubules. In addition, irrigation solutions may extrude apically. Therefore, using different irrigation activation methods is recommended . Another method used to increase the efficacy and distribution of the irrigation solution is laser-activated irrigation (LAI). Er: YAG (2940 nm) lasers are commonly used in LAI due to their suitable wavelength and high absorption in water. 
In recent years, Photon-Induced Photoacoustic Streaming (PIPS®, Fotona, Ljubljana, Slovenia, EU), a LAI method in which an Er:YAG laser is emitted in super-short pulses (0.3 W, 15 Hz, 50 µs) at low, subablative pulse energies (10–20 mJ), producing high power density with very low-temperature evaporation, has been developed. PIPS aims to generate cavitation and photoacoustic shock waves in irrigation solutions to create a strong, three-dimensional flow through the root canal system without increasing temperature. Placement of the PIPS irrigation activation tip requires less root canal enlargement. Therefore, it has been reported that it can effectively deliver irrigation solutions to the apical portion of the root canal system, isthmuses, lateral canals, and resorption areas. The most recent development of LAI in endodontics is the use of an Er:YAG laser with Shock Wave Emission Enhanced Photoacoustic Streaming (SWEEPS®, Fotona, Ljubljana, Slovenia, EU), using a 600 μm fiber tip inserted into the pulp chamber. The operating principle of SWEEPS is similar to that of PIPS (0.3 W, 15 Hz, 50 µs, 20 mJ), but the mode of action differs: in the SWEEPS technique, synchronized ultra-short pulse pairs are delivered to the solution. This feature increases shock wave emission even in the narrowest root canals. The amplification of pressure waves produced with SWEEPS has been reported to be greater than with the standard PIPS procedure, which emits single laser pulses. However, fiber tips with different geometries are designed for the SWEEPS and PIPS modes. It has been reported that the geometry of the fiber tip affects the bubble shape: radial fiber tips form a spherical bubble, whereas flat tips form a channel-like bubble. In the literature, a strong correlation between the geometric design of laser fiber tips and the efficiency of LAI has been reported. Gregorcic et al.
emphasized that the design of laser fiber tips directly affects the fluid dynamics within the cavity and the collapse kinetics of cavitation bubbles, determining the intensity and effectiveness of the photoacoustic effects generated during LAI. Specifically, it has been reported that radial fiber tips enhance irrigation efficiency by distributing energy over a wider area along the root canal. In contrast, flat tips deliver localized, high-intensity energy, ensuring effective disinfection of the root canal walls. These findings demonstrate that the geometric design of laser fiber tips plays a critical role in the efficiency of irrigation and the removal of filling materials within the root canal system. To date, no study has evaluated the organic tissue loss caused by the apically extruded irrigation solution in the periapical area when SNI or LAI with fiber tips of different geometries is used in teeth with a simulated immature apex. The aim of this study is, therefore, to quantitatively evaluate the effect of irrigation activation performed with LAI tips of different geometric designs on organic tissue loss in the periapical area of teeth with an immature apex. The study’s null hypothesis was that there would be no difference in organic tissue loss in the periapical area between SNI and LAI final irrigation activation with fiber tips of different geometries in teeth with a simulated immature apex. The study design was approved, in accordance with the Declaration of Helsinki, by the University of Health Sciences Gülhane Scientific Research Ethics Committee (no: 2024-328). The sample size was calculated with a power analysis using G*Power software 3.1.2 (Universität Düsseldorf, Germany), with an alpha error probability of 0.05 and a power of 80% (effect size = 0.25), based on a recent study of similar design. The power analysis showed that a minimum of 15 samples per group (75 samples in total) was statistically necessary.
Accordingly, 75 bovine mucosa fragments and 15 single-rooted, single-canal human mandibular premolar teeth, extracted for periodontal reasons independently of the study and showing no internal or external resorption, no coronal caries or restorations, no cracks or fractures, and no previous root canal treatment, were included in the study. Since a non-destructive method was to be used, the same 15 teeth were used in all experimental groups, following a recent study of similar design, so the confounding effect of anatomical variation was avoided. Periapical radiographs taken from the buccolingual and mesiodistal angles confirmed that the teeth met the inclusion criteria, had a single root and canal, and were free of internal/external resorption. The included teeth were soaked in 5.25% sodium hypochlorite (NaOCl) (Cerkamed, Cerkamed Company, Stalowa Wola, Poland) for two days to dissolve organic tissue debris. The tissue residues on the teeth were removed with a periodontal curette. The teeth were stored in 0.1% thymol solution until used in the study. Access cavities were prepared using a high-speed rotary instrument and a diamond bur. After ensuring apical patency with a #10 K-file (Perfect, Shenzhen, China), the working length was determined to be 1 mm short of the apical foramen. At the specified working length, root canals were prepared to X3 using ProTaper Next (Dentsply Maillefer, Ballaigues, Switzerland). All files were used at the torque and rpm values specified by the manufacturer. Following each file change, the root canals were irrigated for 30 s with 2 mL of 2% NaOCl (Cerkamed) using a 30G side-vented needle (Ultradent, South Jordan, UT, USA).
To standardize the crown and root lengths, the teeth were marked 5 mm coronally and 11 mm apically from the cementoenamel junction and sectioned at these reference points using a diamond disc (Bredent, Senden, Germany), so that all samples were standardized to a 5 ± 1 mm crown length and an 11 ± 1 mm root length. In each sample, an artificial pulp chamber, the reservoir area for irrigation solutions, was prepared in the coronal 5 ± 1 mm section of the canal using a diamond bur. The apical opening was designed to be 1.5 mm in size to simulate an immature apex. For this purpose, Gates Glidden (VDW, Munich, Germany) burs from #1 to #6 (1.5 mm) were used. The root canals were irrigated with 2% NaOCl (Cerkamed), 17% EDTA (Cerkamed), and distilled water, respectively. The experimental model (Fig. ) was prepared according to the method described by Ribeiro et al. Two layers of overlapping wax sheets with a diameter of 5 mm and a length of 3 mm were placed on the apical part of the teeth and adapted to the tooth with a heated spatula. The tooth’s root was coated with varnish and immersed in acrylic resin. The experimental setup was placed in ice water until polymerization was complete to prevent the exothermic reaction from melting the wax. The tooth was marked tangentially to the border of the acrylic. After removal from the acrylic container, the apical wax layer was removed. A second mark parallel to the first mark was drawn 2 mm apically. The tissue for simulating the periapical tissues was obtained by removing a full-thickness flap from bovine palates obtained from the slaughterhouse and was stored at −18 °C until use. During the experiment, bovine mucosa with a diameter of 5.5 mm and a height of 5 mm was prepared for each sample and thawed in saline at room temperature for 30 min. All prepared bovine mucosae were weighed on a precision balance (average 70–80 mg). Bovine mucosae were placed in the experimental model.
The bovine mucosae were reduced in size with a scalpel until the second parallel line drawn on the tooth was tangential to the acrylic. Bovine mucosae with a distance of less than 2 mm between the first mark drawn on the tooth and the acrylic were removed from the experiment. The bovine mucosae were dried with blotting papers, weighed 3 times on a precision balance, and averaged (mg). This measurement was recorded as the initial weight. The bovine mucosa was placed in the periapical area created in the experimental model, with the epithelium facing the acrylic and the connective tissue facing the root. The tooth was repositioned using a Universal Tester (Instron Corp, Canton, MA) with a compressive force equivalent to 25 gf until the first drawn line was tangent to the acrylic edge. The acrylic and root interface were sealed with a gingival barrier (OpalDam, Ultradent Products, Inc, USA) to maintain a constant back pressure of the tissue on the apex during irrigation. A #100 plugger (Dentsply Maillefer, Ballaigues, Switzerland) was placed up to the apex to compress the portion of bovine mucosae that had entered the canal . The bovine mucosa samples were numbered from 1 to 75. Randomization was performed using computer-generated sequences ( www.random.org ), resulting in a table that assigned randomized sample numbers to five groups, each containing 15 samples ( n = 15): Standard needle irrigation (SNI), PIPS-flat (F), PIPS-radial (R), SWEEPS-flat (F) and SWEEPS-radial (R). SNI 30G side-vented needle (Ultradent) was placed in the root canal 1 mm shorter than the working length. 5 ml of 2% NaOCl (Cerkamed) was applied for 30 s using 3–4 mm amplitude. Thus, the first activation cycle was completed. The same procedure was applied in the second and third activation cycles. After three activation cycles, final irrigation was performed using 5 ml distilled water. 
PIPS-F: The Fotona LightWalker Er:YAG laser was set according to the manufacturer's instructions at 20 mJ, 15 Hz, 0.3 W in SSP (Super Short Pulse) mode with water and air turned off. The PIPS (Fotona, Ljubljana, Slovenia) flat fiber tip (400/14) (Fig. , A) was placed in the reservoir area. Three activation cycles of 30 s each were performed using the solutions in the order and volume indicated for SNI. PIPS-R: Without changing the parameters, the PIPS radial fiber tip (400/14) (Fig. , B) was inserted into the reservoir area, and the final irrigation activation was performed according to the procedure specified for PIPS-F. SWEEPS-F: The Fotona LightWalker Er:YAG laser was set according to the manufacturer's instructions at 20 mJ, 15 Hz, 0.3 W in auto-SWEEPS mode with water and air off. The SWEEPS (Fotona, Ljubljana, Slovenia) flat fiber tip (300/9) (Fig. , C) was placed in the reservoir area. Three activation cycles of 30 s each were performed using the solutions in the order and volume indicated for SNI. SWEEPS-R: The SWEEPS radial fiber tip (600/9) (Fig. , D) was inserted into the reservoir area without changing the parameters. The final irrigation activation was performed according to the procedure specified for SWEEPS-F. In the PIPS-F, PIPS-R, SWEEPS-F, and SWEEPS-R groups, 2% NaOCl (Cerkamed) was continuously added to the reservoir area during activation. After the irrigation activation procedures, the bovine mucosae were removed from the experimental model with fine-tipped tweezers and dried with blotting papers. The dried bovine mucosae were weighed three times on a precision balance and the measurements averaged (mg). This measurement was recorded as the final weight. The amount (mg) of tissue loss was calculated by subtracting the final measurement from the initial measurement. All procedures were performed by a single endodontist (H.K.). Statistical analysis: The Shapiro-Wilk test was applied to confirm the normality of the data obtained.
Since the data were normally distributed, the weight change was analyzed using One-Way ANOVA and post-hoc Tukey tests. All statistical analyses were performed using SPSS version 23 (IBM, Armonk, NY, USA). p < 0.05 was considered statistically significant.
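The outcome computation and omnibus test described in this section can be sketched in stdlib Python: tissue loss as the difference of triplicate-averaged weights, and the one-way ANOVA F statistic across groups. The study itself used SPSS (including the Shapiro-Wilk normality check and Tukey post-hoc comparisons, which are omitted here), so this is only an illustrative sketch of the underlying arithmetic.

```python
from statistics import mean

def tissue_loss_mg(initial_weighings, final_weighings):
    """Tissue loss (mg): mean of the three initial weighings minus mean of the three final ones."""
    return mean(initial_weighings) - mean(final_weighings)

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over within-group mean square."""
    values = [x for g in groups for x in g]
    grand_mean = mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

In the study design, `groups` would hold the per-sample tissue-loss values of the five irrigation groups.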
In the current study, the amount of organic tissue loss in PIPS-R was found to be significantly higher compared to PIPS-F (p < 0.05). However, there was no significant difference in the amount of periapical organic tissue loss among all other tested irrigation activation methods (p > 0.05) (Table ). In teeth with immature apex, the thin dentin walls limit the application of mechanical preparation. Therefore, disinfection of the root canal system in these teeth depends more on irrigation and intracanal medicament application. For this purpose, using different irrigation activation methods to increase the effectiveness of irrigation solutions is recommended in the literature. However, in teeth with immature apex, the wide apical area is a potential risk factor for the apical extrusion of irrigation solutions. It has been reported that solutions extruded from the apical area cause pain, burning, and periapical tissue damage. For these reasons, the search for the ideal disinfection method to minimize the extrusion of irrigation solutions continues.
In this study, we investigated the effect of SNI and LAI with different tip geometries (flat and radial) on the amount of organic tissue loss induced by apically extruded sodium hypochlorite in teeth with a simulated immature apex. The null hypothesis of this study was rejected because there was a difference between the tested irrigation activation methods in the amount of organic tissue loss in the periapical area. In in vitro studies, standardizing variables other than those being tested is essential for an accurate evaluation. In the literature, it has been reported that using teeth of the same group with similar apical diameters and root canal sizes ensures standardization in in vitro studies on teeth with immature apices planned to undergo regenerative endodontic procedures. In the present study, mandibular premolars with similar root-to-crown ratios and similar anatomy were used to simulate teeth with immature apices. To achieve dimensional standardization of the teeth, a defined amount of dental tissue was removed from the crowns and apical roots of all teeth, and the apical diameter was standardized using the procedure described by Zhabuawala et al. In the literature, studies evaluating apical extrusion from different perspectives, such as irrigant and organic tissue loss, have noted that variations in root canal anatomy can lead to potential biases. Many studies have adopted a methodology using the same teeth across all experimental groups to prevent such biases. Similarly, our study employed this approach to minimize the effects of anatomical variations. However, using the same teeth in different groups may alter the buffering capacity of dentin. This issue arises because dentin's potential to interact with irrigants and its buffering properties may be influenced by prior treatments, representing a limitation of our study.
Therefore, future research should aim to develop alternative methods that maintain anatomical consistency while eliminating the potential effects on dentin buffering capacity. Similarly, it has been reported that the concentration of the irrigation solution, the volume of the solution, and the activation time are among the critical variables affecting the results of endodontic treatment. Therefore, all activation methods tested in this study used the same volume of solution at the same concentration, and activation was performed for the same length of time. However, because in LAI, unlike in SNI, the laser carries the solution in the reservoir into the external environment, solution was continuously added to the reservoir area, as in previous studies in the literature. The ethical concerns surrounding the use of human palatal mucosa in research have necessitated the evaluation of alternative biological materials. In this context, despite differences in tissue properties as well as physical and chemical structure, bovine palatal mucosa was selected for this study. The structural similarities and ready availability of bovine mucosa make it a scientifically appropriate and practical experimental model. None of the irrigation activation methods tested in the study prevented extrusion of the irrigation solution from the apical area. The present study found no difference between SNI and PIPS regarding organic tissue loss in the periapical area. In the literature, no study has compared the effect of SNI and LAI performed with tips of different geometries on the amount of organic tissue loss in the periapical area. Therefore, the findings of this study were compared with studies evaluating the effect of different irrigation activation methods on irrigation solution and debris extrusion. In contrast to the findings of our study, Azim et al.
reported that PIPS caused more solution extrusion than SNI in an experimental root-socket model in single-rooted mature teeth. Although the activation time was the same, this difference may be due to the concentration (3%), volume (3 mL), experimental setup, and evaluation methods. Similar to the present study's findings, Arslan et al. reported no difference between SNI and PIPS with different power settings (0.3–0.9 W) in a modified model using single-rooted mature mandibular premolar teeth. Ince Yusufoglu et al. reported no difference in debris extrusion between SNI and PIPS at a power setting similar to ours (0.3 W) in molars with moderate curvature. In PIPS, cavitation and photoacoustic shock waves are generated in the irrigation solution, resulting in a powerful three-dimensional flow through the root canal system without a temperature increase. This strong three-dimensional flow of the irrigation solution may explain why SNI and PIPS caused similar solution extrusion in our study. The present study found no difference between SNI and SWEEPS regarding the amount of organic tissue loss. In contrast to our findings, Vatanpour et al. reported that SNI caused more solution extrusion than SWEEPS, used at a power setting similar to ours (0.3 W), in immature molars. Abat et al. reported that SWEEPS used at a power setting similar to ours (0.3 W) caused a greater amount of solution extrusion than SNI in a regenerative endodontic procedure applying 20 mL of 1% sodium hypochlorite in three-dimensional immature tooth models with an apical opening of 1.5 mm. These differences may be due to methodological variables such as the morphology of the teeth used, preparation size, and the concentration and volume of the solutions. In agreement with our findings, and in contrast to the reports above, Genç Şen et al.
reported no difference in solution extrusion between SNI and SWEEPS in single-rooted teeth irrigated at working length and with over-instrumentation. Since SWEEPS delivers synchronized ultrashort pulse pairs into the solution, it increases shock-wave emission even in the narrowest root canals. This increased shock-wave emission may explain the similar solution extrusion caused by SNI and SWEEPS. In our study, there was no difference between PIPS and SWEEPS, with either radial or flat tips, in the amount of organic tissue loss induced by apically extruded sodium hypochlorite. In contrast to these findings, Snjaric et al. reported that PIPS caused less solution extrusion than SWEEPS after 60 s of LAI using 3% sodium hypochlorite in single-rooted mature teeth. This may be due to differences in the concentration of the irrigation solution used, the tooth morphologies, and the laser application time. Consistent with the findings of the present study, Bolhari et al. reported no difference between PIPS and SWEEPS in the amount of methylene blue extruded apically after photodynamic therapy in single-rooted mature premolars. No study evaluating the effect of tip design on the organic tissue-dissolving efficiency of PIPS and SWEEPS was found in the literature. Although the pulse modes of PIPS and SWEEPS differ, the fact that the energy, power setting, solution volume, and activation time were the same may have played an essential role in this result. Although there was no difference between SWEEPS-F and SWEEPS-R in the present study, PIPS-R was found to cause more organic tissue loss than PIPS-F. Gregorcic et al. studied the dynamic effect of the fiber-tip geometry of Er:YAG lasers on liquids and reported that a spherical bubble forms at radial fiber tips, whereas a channel-like bubble forms at flat fiber tips.
They also stated that the total mechanical energy of the liquid equals the energy of the initial expansion of the bubble. This energy is greater when radial fiber tips are used and smaller when flat fiber tips are used. In addition, as the tip diameter increases, the energy of the bubble formed increases. Therefore, the difference between PIPS-F and PIPS-R may be due to the shape of the bubble formed in the solution. This study demonstrates that SNI and LAI final irrigation activation methods cannot effectively prevent the apical extrusion of irrigation solutions in teeth with immature apices, potentially resulting in complications such as periapical tissue irritation. The findings reveal that PIPS-R tips cause greater tissue damage than PIPS-F tips, while SWEEPS produces consistent outcomes irrespective of tip design. Based on these results, it is recommended that irrigation activation methods be applied with precision and control when treating teeth with immature apices, that emerging technologies be critically evaluated, and that approaches to minimize potential complications be adopted. This study has certain limitations. Foremost among these is the in vitro design, which does not adequately replicate clinical factors such as tissue healing, immune responses, and the long-term effects of irrigant extrusion on surrounding tissues. The use of bovine palatal mucosa instead of human palatal mucosa introduces potential differences in physical, biological, and histological properties that may influence tissue response. Furthermore, while facilitating the standardization of anatomical variables, the exclusive use of mandibular premolars may have inadvertently overlooked anatomical variations specific to other tooth types. The consistent use of the same teeth in the study may also introduce bias due to the buffering capacity of dentin.
To overcome these limitations, advanced studies are needed that better mimic clinical conditions, evaluate a broader range of tooth types, and assess various irrigation activation methods. Future research should focus on evaluating the long-term clinical effects of different irrigation activation methods on periapical tissue healing and regeneration. Studies comparing the use of various lasers at different power settings and fiber tips of varying diameters and lengths under clinical conditions are essential to obtain a more comprehensive understanding of the efficacy and safety of these methods. Additionally, exploring alternative disinfection techniques to reduce apical irrigant extrusion could enhance treatment outcomes. Expanding research to include diverse tooth types and patient-specific variables, such as wound healing potential, would further enable these findings to address a broader range of clinical contexts. |
Herbal medicines for SOD1 | 25f3b67a-77ef-45f7-8b12-f76ef9884821 | 11442286 | Pharmacology[mh] | Introduction Amyotrophic lateral sclerosis (ALS) is characterized by dysfunction of upper and lower motor neurons, affecting the medullary, cervical, thoracic, and/or lumbar segments. This results in progressive weakening of the voluntary skeletal muscles, leading to symptoms such as limb movement impairment, swallowing difficulty, speech problems (dysarthria), and respiratory dysfunction. The median survival time in ALS is reported as 20 to 48 months after the onset of symptoms; 90% to 95% of cases are sporadic ALS, and 5% to 10% of patients have familial ALS. As ALS remains incurable, treatment is focused on using disease-modifying therapies and maximizing quality of life. Some countries have approved riluzole and edaravone as medications for slowing the progression of ALS. Riluzole, an anti-glutamate agent, prolongs survival in ALS patients in clinical trials and post-marketing analyses, but whether this occurs in all stages of ALS or only in advanced disease remains controversial. Some studies have reported that people with ALS who meet certain criteria may benefit from the use of edaravone, which has antioxidant properties. However, possibly because the study design lacked general applicability to the wider population of ALS patients, post-marketing analyses have raised questions about the safety and benefits of edaravone. As a result, the use of edaravone is still controversial and does not yet have regulatory approval worldwide. An increasing number of people with ALS resort to herbal medicines (HMs) because of the modest benefits of current therapies. In China and several other Asian countries, HM is widely used alongside Western medicine (WM), with both systems cooperating to provide healthcare services for the population. However, the diverse responses to HM continue to be a subject of ongoing debate and challenge.
In recent decades, numerous studies have investigated the effectiveness of HMs in treating ALS. Previous systematic reviews have indicated that short-term adjunctive use of HM may improve ALS Functional Rating Scale (ALSFRS) scores and clinical outcomes, with a favorable safety profile compared to placebo or riluzole alone. However, further research is required to evaluate the long-term efficacy of patient-oriented outcomes. Additionally, there is very low to low-quality evidence indicating that HMs may produce superior treatment responses for ALS without an increased risk of adverse events. Nevertheless, with their widespread use, HMs have attracted both praise and criticism. A single-center cohort study found that certain HMs were associated with a poorer prognosis in ALS patients. ALS is a fatal neurodegenerative disease of the central nervous system, and its etiology and pathogenesis remain unclear. Evidence from clinical studies suggests that dysregulated immune responses contribute to heterogeneity in the clinical presentation of ALS. Immune inflammation caused by abnormal immune dysregulation, such as microglial activation, astrocyte proliferation, and T-cell infiltration, can be observed at the site of motor neuron degeneration, and immune cell infiltration can accelerate disease progression. All of this suggests that abnormal immune dysregulation plays an important role in the occurrence and development of ALS, and that targeting it may be an effective treatment strategy. Indeed, in animal models of ALS and in vitro, the effects of HMs have been consistently praised. This phenomenon creates confusion among researchers and patients regarding the efficacy of HM in ALS studies, leading to questions about its utility and study-design implications. The efficacy and mechanisms of HMs for experimental ALS have not yet been systematically evaluated.
In addition, a systematic review of animal data can provide preclinical evidence of the potential translational value from animal models to human disease. Thus, the present study aims to evaluate the efficacy and immunologic mechanisms of HMs in experimental ALS animal models. Methods 2.1 Systematic analysis 2.1.1 Approach We followed the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). There was no need for ethical approval because this was a literature-based study. 2.1.2 Data sources and search strategy Two experienced researchers (YJL and WJY) independently carried out extensive searches for studies on HM for ALS. We searched the following electronic databases from their inception until April 10, 2024: PubMed, EMBASE, Web of Science, Cochrane Library, Wanfang database, VIP database, China National Knowledge Infrastructure (CNKI), and SinoMed. The following keywords were used for the preclinical evidence: (“Chinese herbal medicine” OR “herbal medicine” OR “Traditional Chinese medicine” OR “Chinese Drug” OR “Korean Medicine” OR “East Asia Medicine”) AND (“Amyotrophic Lateral Sclerosis” OR “motor neuron disease” OR “Gehrig’s Disease” OR “Motor System Diseases”). Moreover, the reference lists of potential articles were searched for relevant studies. All included studies were limited to animal experiments. 2.1.3 Inclusion and exclusion criteria Literature screening was conducted collaboratively by a minimum of two members of our research team. In the initial screening phase, a comprehensive plan for full-text screening was meticulously devised. Each researcher independently assessed the abstracts and methodologies of the literature, initially selecting relevant articles. Subsequently, selected literature underwent thorough full-text review, with articles meeting the criteria being ultimately chosen.
The screening results were then integrated and consolidated by the research team members, leading to the creation of corresponding flowcharts. In cases of differing opinions among team members during the screening process, careful negotiation and discussion were undertaken to reach a unanimous final decision. Studies meeting the following inclusion criteria were included in the meta-analysis: (1) studies of HMs for ALS; (2) control groups receiving riluzole, edaravone, normal saline, or distilled water; (3) HM used as monotherapy in the intervention group; (4) use of identified SOD1-G93A transgenic mice; (5) primary outcome measures of onset time or survival time, with stride length and disease duration as secondary outcome measures. When a mouse's longest time on the rotating rod fell below 5 min, that day was recorded as the onset time. A mouse was recorded as dead on the day it could not right itself within 30 seconds after being laid on its side. Exclusion criteria for the studies were as follows: (1) lack of a control group; (2) case reports, clinical trials, or reviews; (3) cell experiments; (4) repeated publications or studies with missing data. 2.1.4 Data extraction Two separate authors independently extracted the following details from the included studies: (1) the primary author's name and the year of publication; (2) the specific details of the animals in each study, including species, quantity, and sex; (3) the accepted ALS mouse model; (4) the specifics of the treatment group, including the dosage and administration method of the therapeutic drug and the treatment duration, along with corresponding details for the control group; and (5) efforts were made to contact authors for supplementary information when certain publications contained only graphical data. In instances of no response, numerical data were extracted from graphs using digital ruler software.
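The onset criterion above (the first day a mouse's longest rotarod time falls below 5 min) can be expressed as a small helper; the function name and data layout are illustrative assumptions, not taken from the included studies.

```python
ONSET_CUTOFF_S = 5 * 60  # onset criterion: longest time on the rotating rod under 5 minutes

def onset_day(daily_longest_times_s):
    """Return the first test day (1-based) whose longest rotarod time is under the
    cutoff, or None if the mouse never meets the onset criterion."""
    for day, longest in enumerate(daily_longest_times_s, start=1):
        if longest < ONSET_CUTOFF_S:
            return day
    return None
```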
2.1.5 Study quality and risk of bias In the methodology section of this study, we will employ two evaluation tools, CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) and SYRCLE’s ROB tool (Systematic Review Centre for Laboratory animal Experimentation’s Risk of Bias) , to assess the quality and risk of bias in the included studies. CAMARADES will be used to evaluate the experimental design and methodological quality of the studies, such as sample size, randomization, and blinding. SYRCLE’s ROB will be used to assess the risk of bias in each study during the experimental process, including selective reporting and sample size calculation. By utilizing these two evaluation tools in conjunction, we aim to comprehensively assess the quality and risk of bias in animal experimental studies, providing stronger support for the interpretation and generalization of research results. CAMARADES primarily focused on ten aspects of the literature: (1) Publication in a peer-reviewed journal; (2) statement of temperature control; (3) randomization to treatment group; (4) allocation concealment; (5) blinded assessment of outcome; (6) avoidance of anesthetics with known notable intrinsic neuroprotective properties; (7) Use appropriate ALS animal models; (8) sample size calculation; (9) compliance with animal welfare regulations; (10) declared any potential conflict of interest. The scoring method of this scale involves assigning scores on a scale ranging from 0 to 10. 
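Since each CAMARADES item contributes one point to the 0-10 score, quality scoring reduces to a sum over ten booleans; a minimal illustrative sketch, with item labels paraphrased from the list above:

```python
CAMARADES_ITEMS = (
    "peer-reviewed publication", "temperature control", "randomization",
    "allocation concealment", "blinded outcome assessment",
    "avoidance of neuroprotective anesthetics", "appropriate ALS model",
    "sample size calculation", "animal welfare compliance",
    "conflict-of-interest statement",
)

def checklist_score(satisfied):
    """Score a study: one point per satisfied checklist item (0-10)."""
    return sum(bool(satisfied.get(item, False)) for item in CAMARADES_ITEMS)
```

Scoring the ten-item SYRCLE checklist would work the same way with its own item labels.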
In contrast, SYRCLE’s ROB primarily focused on ten aspects of the literature: (1) Randomization (selection bias); (2) Random sequence generation (selection bias); (3) Baseline characteristics (selection bias); (4) Allocation concealment; (5) Random housing (performance bias); (6) Blinding of personnel (performance bias); (7) Random outcome assessment (detection bias); (8) Blinding of outcome assessment (detection bias); (9) Incomplete outcome data (attrition bias); (10) Selective reporting (reporting bias). According to the scoring method of this scale, scores are assigned on a scale from 0 to 10. 2.1.6 Statistical analysis Review Manager 5.4.1 statistical software was used for the meta-analysis of the included literature. Stata 15 software was used to conduct sensitivity analyses to assess the robustness of the data analysis. The outcome measures selected in our study were onset time, survival time, stride length, and disease duration, all of which are continuous. For these variables, we applied the weighted mean difference (WMD) or standardized mean difference (SMD) as summary statistics. Each effect size included a 95% confidence interval (95% CI), and a combined p-value ≤ 0.05 was deemed statistically significant. 2.2 Scoping review 2.2.1 Approach The scoping review adhered to the Joanna Briggs Institute methodological guidelines, and reporting of this review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. There was no need for ethical approval because this was a literature-based study. 2.2.2 Data sources and search strategy Two experienced researchers (YJL and LJJ) independently conducted comprehensive searches for studies on the immune mechanism effects of HM, using the PubMed and CNKI databases from their inception until April 10, 2024.
The following keywords were used for the possible mechanisms of immunology: (“Chinese herbal medicine” OR “herbal medicine” OR “Traditional Chinese medicine” OR “Chinese Drug” OR “Korean Medicine” OR “East Asia Medicine”) AND (“immune response” OR “immune dysregulation” OR “immunologic mechanism” OR “Immunological reaction” OR “Immune Processes”), in both English and Chinese. There was no limitation on language or publication type. We also screened the references of included studies to ensure that no eligible studies were missed by the search strategy. 2.2.3 Inclusion and exclusion criteria In our study, two team members (YJL and WJY) collaborated on literature screening. We meticulously planned the full-text screening process. Each researcher independently evaluated abstracts and methodologies to find relevant articles. Selected literature underwent a rigorous full-text review, advancing only if it met our criteria. We synthesized the screening results and illustrated them with concise flowcharts. In case of disagreements, we discussed until reaching a consensus to ensure the integrity of our decisions. For inclusion in the scoping review, articles had to focus on the mechanisms of HM in treating ALS, with a particular emphasis on immune mechanisms. The search encompassed all types of qualitative, quantitative, and mixed-methods studies, irrespective of their design, and was not restricted by language. Publications were excluded if they did not discuss the results of ALS or HM. Additionally, publications focusing solely on WM treatment and not on the mechanism of herbal treatment for ALS were excluded. Finally, publications meeting the aforementioned criteria but reporting non-primary research, such as editorials, letters, concept papers, review articles, unpublished literature, dissertations, books, and book chapters, were also excluded. 2.2.4 Data extraction We collected data using specially designed extraction forms.
The following information was recorded for each study: (a) author, (b) year of publication,(c) journal, (d) country, (e) type of studies, (f) Involved mechanisms (g)Involved HMs Two researchers performed the data extraction and synthesis processes independently (YJL and LJJ). A third researcher resolved any disagreement. Systematic analysis 2.1.1 Approach We followed the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) . There was no need for ethical approval because this was a literature research. 2.1.2 Data sources and search strategy Two experienced researchers (YJL and WJY) independently carried out extensive searches for studies on HM for ALS. We searched the following electronic databases from their inception until April 10, 2024: PubMed, EMBASE, Web of Science, Cochrane Library, Wan fang database, Vasoactive Intestinal Polypeptide (VIP), China National Knowledge Infrastructure (CNKI), and Sinomed. The following keywords were used for the preclinical evidence: (“Chinese herbal medicine” OR “herbal medicine” OR “Traditional Chinese medicine” OR “Chinese Drug” OR “Korean Medicine” OR “East Asia Medicine”) AND (“Amyotrophic Lateral Sclerosis” OR “motor neuron disease” OR “Gehrig’s Disease” OR “Motor System Diseases”). Moreover, reference lists of potential articles were searched for relevant studies. All the studies included were limited on animals . 2.1.3 Inclusion and exclusion criteria Literature screening was conducted collaboratively by a minimum of two members of our research team. In the initial screening phase, a comprehensive plan for full-text screening was meticulously devised. Each researcher independently assessed the abstracts and methodologies of the literature, initially selecting relevant articles. Subsequently, selected literature underwent thorough full-text review, with articles meeting the criteria being ultimately chosen. 
The screening results were then integrated and consolidated by the research team members, leading to the creation of corresponding flowcharts. In cases of differing opinions among team members during the screening process, careful negotiation and discussion were undertaken to reach a unanimous final decision. The studies meeting the inclusion criteria were included in the meta-analysis: (1) Studies of HMs for ALS; (2) Inclusion of studies with Riluzole, Edaravone, normal saline and distilled water as control groups; (3) HM as monotherapy was used in the intervention group; (4) Identified SOD1 G93A transgene mouse; (5) The primary outcome measures were onset time or survival time, while the secondary outcome measure was stride length and duration time. When the mice on the transfer bar movement for the longest time less than 5 min record the day as the onset time . After mice will lie down if it cannot turn to normal within 30 seconds gesture, determine its death, the death date of record this day for mice . Exclusion criteria for the studies were as follows: (1) Lack of a control group; (2) Case reports, clinical experiments, reviews; (3) Cell experiments; (4) Repeated publications or studies with missing data. 2.1.4 Data extraction Two separate authors independently extracted the following details from the included studies: (1) The primary author’s name and the year of publication; (2) The specific details of animals for each study, including species, quantity, gender; (3) Accepted ALS mouse model; (4) The specifics of the treatment group, including the dosage, administration method of the therapeutic drug, treatment duration, and corresponding details for the control group; and (5) Efforts were made to contact authors for supplementary information when certain publications contained only graphical data. In instances of no response, numerical data were extracted from graphs using digital ruler software. 
2.1.5 Study quality and risk of bias

We employed two evaluation tools, CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) and SYRCLE's RoB tool (Systematic Review Centre for Laboratory animal Experimentation's Risk of Bias), to assess the quality and risk of bias of the included studies. CAMARADES was used to evaluate experimental design and methodological quality, such as sample size, randomization, and blinding. SYRCLE's RoB was used to assess the risk of bias arising during the conduct of each study, including selective reporting and sample size calculation. Using the two tools together allows a comprehensive assessment of quality and risk of bias, providing stronger support for the interpretation and generalization of the results. CAMARADES covers ten aspects of each study: (1) publication in a peer-reviewed journal; (2) statement of temperature control; (3) randomization to treatment group; (4) allocation concealment; (5) blinded assessment of outcome; (6) avoidance of anesthetics with notable intrinsic neuroprotective properties; (7) use of an appropriate ALS animal model; (8) sample size calculation; (9) compliance with animal welfare regulations; (10) declaration of any potential conflict of interest. One point is assigned per item, giving a score ranging from 0 to 10.
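The one-point-per-item CAMARADES scoring can be sketched as a simple checklist tally. The example study below is invented for illustration; only the ten item labels come from the text above.

```python
# CAMARADES quality scoring: one point per checklist item met, 0-10 total.
# Item wording follows Section 2.1.5; the example study is hypothetical.
CAMARADES_ITEMS = [
    "peer-reviewed publication",
    "statement of temperature control",
    "randomization to treatment group",
    "allocation concealment",
    "blinded assessment of outcome",
    "avoidance of neuroprotective anesthetics",
    "appropriate ALS animal model",
    "sample size calculation",
    "compliance with animal welfare regulations",
    "declared conflict of interest",
]

def camarades_score(items_met: set) -> int:
    """Count how many of the ten checklist items a study satisfies."""
    return sum(1 for item in CAMARADES_ITEMS if item in items_met)

# Hypothetical study meeting seven of the ten items:
example = {
    "peer-reviewed publication",
    "appropriate ALS animal model",
    "randomization to treatment group",
    "statement of temperature control",
    "compliance with animal welfare regulations",
    "avoidance of neuroprotective anesthetics",
    "blinded assessment of outcome",
}
print(camarades_score(example))  # 7
```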
In contrast, SYRCLE's RoB covers ten domains: (1) randomization (selection bias); (2) random sequence generation (selection bias); (3) baseline characteristics (selection bias); (4) allocation concealment; (5) random housing (performance bias); (6) blinding of personnel (performance bias); (7) random outcome assessment (detection bias); (8) blinding of outcome assessment (detection bias); (9) incomplete outcome data (attrition bias); (10) selective reporting (reporting bias). Scores are likewise assigned on a scale from 0 to 10.

2.1.6 Statistical analysis

Review Manager 5.4.1 was used for the meta-analysis of the included literature, and Stata 15 was used for sensitivity analyses to assess the robustness of the results. The outcome measures were onset time, survival time, stride length, and disease duration, all of which are continuous variables. For these variables we used the weighted mean difference (WMD) or standardized mean difference (SMD) as the summary statistic. Each effect size is reported with a 95% confidence interval (95% CI), and a combined p-value ≤ 0.05 was considered statistically significant.
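Random-effects pooling of SMDs in RevMan uses the DerSimonian–Laird estimator of between-study variance. A minimal sketch of that computation is shown below; the study effects and variances are invented for illustration and are not data from the included studies.

```python
import math

# Minimal DerSimonian-Laird random-effects pooling (the model family used
# by RevMan for the analyses described above). Inputs are invented.
def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I² heterogeneity (%)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Four hypothetical per-study SMDs with their variances:
pooled, ci, i2 = dersimonian_laird([1.2, 2.1, 0.8, 2.6], [0.10, 0.15, 0.12, 0.20])
print(pooled, ci, i2)
```

When τ² = 0 the random-effects weights collapse to the fixed-effect weights, which is why the model is preferred only when heterogeneity (I²) is substantial, as in the analyses reported in Section 3.1.4.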
Scoping review

2.2.1 Approach

The scoping review adhered to the Joanna Briggs Institute methodological guidelines, and reporting followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Ethical approval was not required because this was a literature-based study.

2.2.2 Data sources and search strategy

Two experienced researchers (YJL and LJJ) independently conducted comprehensive searches for studies on the immune-related mechanisms of HM, using the PubMed and CNKI databases from their inception until April 10, 2024. The following keywords, in both English and Chinese, were used for the possible immunological mechanisms: (“Chinese herbal medicine” OR “herbal medicine” OR “Traditional Chinese medicine” OR “Chinese Drug” OR “Korean Medicine” OR “East Asia Medicine”) AND (“immune response” OR “immune dysregulation” OR “immunologic mechanism” OR “Immunological reaction” OR “Immune Processes”). No restriction was placed on language or publication type. We also screened the references of included studies to ensure that no eligible studies were missed by the search strategy.

2.2.3 Inclusion and exclusion criteria

Two team members (YJL and WJY) collaborated on literature screening. The full-text screening process was planned in advance. Each researcher independently evaluated abstracts and methods to identify relevant articles; selected articles then underwent rigorous full-text review and advanced only if they met our criteria. We synthesized the screening results and illustrated them with flowcharts. Disagreements were discussed until consensus was reached. For inclusion in the scoping review, articles had to focus on the mechanisms of HM in treating ALS, with particular emphasis on immune mechanisms.
The search encompassed qualitative, quantitative, and mixed-methods studies of any design, without language restriction. Publications were excluded if they did not report results on ALS or HM, or if they focused solely on Western medicine (WM) treatment rather than the mechanisms of herbal treatment for ALS. Finally, publications meeting the above criteria but reporting non-primary research, such as editorials, letters, concept papers, review articles, unpublished literature, dissertations, and books or book chapters, were also excluded.

2.2.4 Data extraction

We collected data using specially designed extraction forms. The following information was recorded for each study: (a) author, (b) year of publication, (c) journal, (d) country, (e) type of study, (f) involved mechanisms, and (g) involved HMs. Two researchers (YJL and LJJ) performed the data extraction and synthesis independently; a third researcher resolved any disagreements.
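The extraction form in Section 2.2.4 can be modeled as a simple record type whose fields mirror items (a)–(g). This is a sketch only; the class name and the sample entry are invented.

```python
from dataclasses import dataclass, field

# Record type mirroring the scoping-review extraction form, items (a)-(g).
# The sample entry below is hypothetical.
@dataclass
class ExtractionRecord:
    author: str
    year: int
    journal: str
    country: str
    study_type: str                                       # animal, cell, clinical, pharmacological
    mechanisms: list = field(default_factory=list)        # involved mechanisms
    herbal_medicines: list = field(default_factory=list)  # involved HMs

record = ExtractionRecord(
    author="Example et al.", year=2020, journal="Example Journal", country="China",
    study_type="animal",
    mechanisms=["blood-brain barrier protection"],
    herbal_medicines=["ginsenoside Rg1"],
)
print(record)
```

A list of such records is then straightforward to group by mechanism or study type when tabulating the characteristics of included studies.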
Results

3.1 Systematic analysis

3.1.1 Study inclusion

For the preclinical evidence, we initially identified 2026 records through systematic searches of the databases. After removing duplicates, 1718 records remained. On examination of titles and abstracts, 1679 articles were excluded for one or more of the following reasons: (1) the article was a review, case report, comment, abstract-only publication, or editorial; (2) the article was not an animal study; (3) the article was not related to ALS; (4) the article did not address the therapeutic effects of HM. After full-text examination of the remaining 39 articles, 21 were excluded for one or more of the following reasons: (1) the primary outcome measures were not survival time and onset time; (2) incomplete outcome data; (3) primarily cell-based studies; (4) intervention involving acupuncture at the Zusanli (ST36) acupoint.

3.1.2 Characteristics of included studies

We ultimately selected 18 studies involving 19 comparisons, comprising 8 Chinese and 10 English publications; one study contributed two comparisons because of its treatment design. All 19 comparisons used SOD1 G93A mice, with each experimental group receiving HM treatment exclusively. Fourteen comparisons included a blank control group regularly administered saline or distilled water, while six comparisons established a positive control group, five using riluzole and one using edaravone. Seven comparisons used male animals exclusively, one used female animals exclusively, three used an equal mix of males and females, and eight did not specify the sex of the animals. Six of the included comparisons employed a concentration gradient of HMs.
Nine comparisons administered treatments via oral gavage, six via oral administration, two via intraperitoneal injection, one via bilateral subcutaneous injection, and one did not specify the route of administration. Ten comparisons recorded both onset time and survival time as outcome measures. Where a concentration gradient was employed, the highest-concentration group was used for the outcome measures.

3.1.3 Study quality and risk of bias

3.1.3.1 CAMARADES

The quality scores of the studies ranged from 4 to 8 out of a possible 10. One study received a score of 4; two studies received 5; four studies received 6; seven studies received 7; and four studies received 8. All included records were peer-reviewed publications, and all studies used appropriate animal models without anesthetics with marked intrinsic neuroprotective properties. Twelve studies mentioned random allocation of animals into treatment and control groups; the specific randomization methods mentioned in two studies were a random number table and sequential numbering. Three studies reported blinded outcome assessment, but no study reported a sample size calculation. Twelve studies described temperature control, twelve reported compliance with animal welfare regulations, and seven declared no potential conflicts of interest.

3.1.3.2 SYRCLE

SYRCLE's RoB is currently the only tool specifically designed to evaluate the internal validity of animal experiments; it was developed from the Cochrane risk-of-bias tool with additional items. The risk-of-bias scores of the studies ranged from 3 to 7 out of 10: two studies received a score of 3; five received 4; seven received 5; three received 6; and one received 7. As shown in the corresponding figure, within the 10 items: A.
Sequence generation: two studies used a random number table for grouping, rated as "low risk"; ten studies mentioned only "random" without further detail, and six studies did not describe the grouping method, rated as "unclear risk." B. Baseline characteristics: three studies reported baseline comparisons of the mice, rated as "low risk"; the remaining fifteen studies mentioned only some of the animals' age, sex, weight, or strain and did not provide baseline values for the relevant outcome measures, hence rated as "unclear risk." C. Allocation concealment: two studies mentioned "random" or "random number table," rated as "low risk"; the remaining sixteen studies did not mention allocation concealment, or the information provided was insufficient to ensure the unpredictability of the random sequence, hence rated as "unclear risk." D. Random housing: fifteen studies indicated that mice were housed in similar environments with free access to water and comparable temperature and humidity, rated as "low risk"; three studies did not mention housing conditions, rated as "unclear risk." E. Performance bias (blinding): no study described blinding of animal caregivers, researchers, or outcome assessors, hence all were rated as "unclear risk." F. Outcome assessment: three studies mentioned "random" selection of mice for outcome assessment, rated as "low risk"; fifteen studies did not, hence rated as "unclear risk." G. Detection bias (blinding): one study mentioned blinding when evaluating experimental results, rated as "low risk"; seventeen studies did not, hence rated as "unclear risk." H.
Incomplete outcome data: one study had missing data during the experiment but did not explain whether the missing data affected the validity of the final results, hence rated as "high risk"; four studies reported only the data range, making it impossible to determine whether data were missing, hence rated as "unclear risk." I. Selective outcome reporting: no incomplete reporting of data was found in any study, rated as "low risk." J. Other sources of bias: no other sources of bias were identified, hence rated as "low risk."

3.1.4 Effectiveness

3.1.4.1 Onset time

3.1.4.1.1 Meta-analysis

Twelve studies were included, with a total sample size of 267 animals (134 treated with HM and 133 controls); individual study sample sizes ranged from 5 to 25. The heterogeneity test showed I² = 75%, P < 0.01, so a random-effects model was used. The pooled standardized mean difference (SMD) was 1.75, 95% confidence interval (CI) 1.14 to 2.36, Z = 5.60, P < 0.01, indicating that HM treatment was effective in delaying onset and superior to the control group (P < 0.01).

3.1.4.1.2 Sensitivity analysis

Further sensitivity analyses comparing HM with conventional feeding regimens (12 trials, 267 animals) confirmed the benefit of HM on onset time, with the pooled estimate remaining stable across analyses.

3.1.4.2 Survival time

3.1.4.2.1 Meta-analysis

Seventeen studies were included, with a total sample size of 385 animals (192 treated with HM and 193 controls); individual study sample sizes ranged from 5 to 25. The heterogeneity test showed I² = 85%, P < 0.01, so a random-effects model was used. The pooled SMD was 1.42, 95% CI 0.79 to 2.04, Z = 4.44, P < 0.01, indicating that HM treatment was effective in prolonging survival and superior to the control group (P < 0.01).
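The reported pooled statistics are internally consistent: for a 95% CI, the standard error is (upper − lower) / (2 × 1.96) and Z = SMD / SE. A quick check, using only the numbers reported above:

```python
# Consistency check on the pooled results: recover Z from the SMD and its 95% CI.
def z_from_ci(smd, lower, upper):
    se = (upper - lower) / (2 * 1.96)  # SE implied by a 95% CI
    return smd / se

z_onset = z_from_ci(1.75, 1.14, 2.36)     # ≈ 5.6 (reported: Z = 5.60)
z_survival = z_from_ci(1.42, 0.79, 2.04)  # ≈ 4.45 (reported: Z = 4.44)
print(z_onset, z_survival)
```

Both recomputed values agree with the reported Z statistics to within the rounding of the CI endpoints.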
3.1.4.2.2 Sensitivity analysis

Further sensitivity analyses comparing HM with conventional feeding regimens (17 trials, 385 animals) confirmed the benefit of HM on survival time, with the pooled estimate remaining stable across analyses.

3.1.4.3 Stride length

Four studies were included, with a total sample size of 89 animals (44 treated with HM and 45 controls); individual study sample sizes ranged from 5 to 25. The heterogeneity test showed I² = 62%, P = 0.05. A random-effects model was used, and a subgroup analysis was conducted by sample size (n > 10 versus n < 10). In the n < 10 group, three studies were included with a total of 41 animals (20 HM, 21 control). The pooled SMD was 2.80, 95% CI 0.90 to 4.70, Z = 2.89, P = 0.004, indicating that HM was effective for ALS in this subgroup and superior to the control group (P < 0.05). In the n > 10 group, one study was included with 48 animals (24 HM, 24 control). The pooled SMD was 1.90, 95% CI 1.21 to 2.59, Z = 5.39, P < 0.01, indicating that HM was effective compared with the control group in this subgroup as well (P < 0.01).

3.1.4.4 Duration time

Of the 18 included studies, 3 assessed the therapeutic effect by recording disease duration in ALS mice for both the treatment and control groups, and a meta-analysis of these durations was conducted to evaluate the efficacy of HM in the ALS animal model. In these 3 studies, the treatment and control groups each included 25 animals.
The heterogeneity test showed P = 0.004, I² = 82%, indicating statistically significant between-study heterogeneity, so a random-effects model was used to pool the effect sizes for disease duration. The pooled difference in disease duration between groups (MD = 6.79, 95% CI −0.28 to 13.87) was not statistically significant (P = 0.06), suggesting that, judged by recorded disease duration, HM did not significantly delay the progression of ALS compared with the control group.

3.2 Scoping review

3.2.1 Study inclusion

For the scoping review of the immune mechanisms of HM in the treatment of ALS, a total of 3702 articles were retrieved from the initial search of the two databases. After removing duplicates, 3694 articles were screened by title and abstract. Seventy-five articles underwent full-text analysis, of which 35 were included in this scoping review.

3.2.2 Characteristics of included studies

This scoping review encompasses 35 studies: 25 from China, 5 from Korea, 1 from Japan, and 4 from other countries, published between 2005 and 2023. There are 13 animal studies, 10 cell studies, 3 clinical trials, and 9 pharmacological studies. Five immune modulation pathways are covered, along with three other mechanisms; the literature on blood-brain barrier protection is the most abundant, with 8 studies. In total, 26 different single drugs and compound formulations are involved.

3.2.3 Mechanisms of HM in the treatment of ALS

ALS is an incurable neurodegenerative disease affecting the upper and lower motor neurons of the cerebral cortex and spinal cord; its etiology and pathogenesis remain unknown.
Some studies indicate that HMs show promise in combating oxidative stress, excitatory amino acid toxicity, neuroinflammation, and calcium cytotoxicity, offering hope for the treatment of ALS.

3.2.3.1 Excitatory amino acid toxicity

In 1957, the pioneering research of Lucas and Newhouse demonstrated the lethal effects of glutamate on neurons in the central nervous system (CNS). Since then, the molecular mechanisms underlying neuronal injury from excessive glutamate receptor stimulation have begun to be unraveled, indicating that glutamate may exert toxicity on neurons through multiple pathways. The excitotoxicity resulting from abnormally elevated extracellular glutamate, including the generation of free radicals and lipid peroxides, can induce spontaneous dissolution and degeneration of neurons, contributing to the development of ALS. Numerous experimental findings demonstrate the ability of cryptotanshinone to counter glutamate-induced cytotoxicity and safeguard neurons, indicating its potential utility in mitigating the onset of ALS. The pivotal involvement of the PI3K/Akt signaling pathway in cell survival against glutamate-induced toxicity has been underscored. Several Chinese herbs can inhibit amino acid toxicity and thereby shield neurons. For instance, Acanthopanax extract elevates heme oxygenase (HO)-1 expression, curbing the LPS-induced generation of NO/ROS; notably, HO-1 expression safeguards cells against glutamate-induced neuronal death. Additionally, cryptotanshinone-mediated neuroprotection combats glutamate-induced toxicity by activating the PI3K/Akt pathway and averting the downregulation of Bcl-2 within the anti-apoptotic protein family. Furthermore, Mahesh's research revealed cryptotanshinone's capacity to hinder nerve cell apoptosis induced by sodium nitroprusside (SNP), further evidence of its neuroprotective properties.
3.2.3.2 Oxidative stress

Oxidative stress is caused by an imbalance between the production and removal of reactive oxygen species (ROS) and reactive nitrogen species (RNS). It can oxidatively modify bioactive molecules such as proteins, lipids, sugars, and nucleic acids so that they lose their original structure and function, impairing normal cellular physiology and ultimately leading to cell degeneration and necrosis. Under normal conditions, free radicals do not cause pathological changes, because the body has enzymatic defenses against free radical damage, such as SOD1, glutathione peroxidase (GSH-Px), and catalase (CAT), as well as a non-enzymatic defense system, including the antioxidants carotenoids, tocopherols, and vitamin C, together with free-metal- and heme-binding proteins. These can interrupt the free radical chain reaction or convert free radicals into less active substances, keeping the production and removal of free radicals in balance. If the production of free radicals exceeds the body's capacity to remove them, oxidative stress ensues. Allicin, a primary compound of garlic oil, can induce phase II enzymes, enhancing antioxidant activity and protecting ALS neurons from oxidative stress. Oral administration of diallyl trisulfide (DATS) to SOD1-G93A mice at clinical onset induced HO-1 expression in the lumbar spinal cord, directly counteracting oxidative damage; these findings suggest that oral DATS significantly extends the lifespan of the mice.

3.2.3.3 Cytotoxicity of calcium

In classical acute excitotoxicity, the influx of Na+ and Cl− disrupts intracellular Ca2+ homeostasis, triggering a cascade of detrimental biochemical processes. Opening of voltage-gated calcium channels leads to a surge in calcium ions, resulting in excessive release of the excitatory amino acid glutamate.
This influx of calcium ions through NMDA/AMPA receptors, metabotropic glutamate receptors, and voltage-dependent calcium channels activates enzymes such as proteases, lipases, kinases, nucleases, and NOS. The generation of free radicals and synthesis of NO further exacerbate neuronal damage, ultimately leading to programmed cell death via activation of apoptosis genes. Neuroprotective drugs are thought to act primarily by preventing calcium influx, regulating excitatory amino acid toxicity, and modulating microvascular inflammatory responses. Studies have shown that ligustrazine, an extract of the Chinese herb Chuanxiong, can protect nerve cells by lowering intracellular calcium levels and inhibiting glutamate release. Callewaere et al. investigated the protective effect of ligustrazine by stimulating nerve cells with stromal cell-derived factor 1 (SDF-1), which elevates intracellular calcium, and then treating them with ligustrazine. They observed a significant decrease in intracellular calcium in the ligustrazine-treated group exposed to SDF-1, indicating ligustrazine's ability to mitigate calcium cytotoxicity and serve as a neuroprotective agent.

3.2.3.4 Other relevant mechanisms

Some studies have also found that the incidence of ALS is related to neurotrophic factor deficiency, imbalance of metals and trace elements, cell apoptosis, viral infection, and abnormal neurofilament aggregation.

3.2.4 The possible immunological mechanisms of HM in treating ALS

HM has good anti-inflammatory properties and extensive immunomodulatory effects, and can be effective in disorders of the immune system. ALS may be alleviated and treated by protecting the blood-brain barrier, countering neuroinflammation, inhibiting activation of the complement system, inhibiting natural killer cell cytotoxicity, and regulating T cells.
3.2.4.1 Blood-brain barrier protection

The blood-CNS barrier comprises the blood-brain barrier and the blood-spinal cord barrier, which control the transport of substances across the barrier and thereby maintain a relatively stable environment within nervous tissue. It is an important immune barrier in ALS, effectively preventing toxic substances from infiltrating the central nervous system from the blood and pumping toxins outward. The targets through which HM protects the blood-brain barrier are closely related to the tight junction proteins, which comprise three families of peripheral cytoplasmic proteins, occludin, claudins, and junctional adhesion molecules, together with the zonula occludens proteins (ZO-1, ZO-2, and ZO-3). Both borneol and astragaloside can increase the expression of ZO-1 and claudin-5, and their combination with total notoginseng saponins can also inhibit the downregulation of ZO-1 and occludin, significantly improving blood-brain barrier permeability. When borneol was administered with safflower, the expression of MMP-2 and MMP-9 decreased and the expression of ZO-1 and claudin-5 increased. Sijunzi Decoction can increase the expression of occludin, ZO-1, claudin-1, and their mRNAs. After the use of Buyang Huanwu Decoction, von Willebrand factor in serum and vascular endothelial growth factor, MMP-9, and MMP-2 in brain tissue decreased, indicating protection of the blood-brain barrier. Ginsenoside Rg1 can likewise up-regulate ZO-1 and occludin and down-regulate matrix metalloproteinase-2 and -9 to restore the integrity of the blood-brain barrier. Breviscapine acts mainly by up-regulating the expression of CD63 and the blood-brain barrier tight junction proteins claudin-5, occludin, and ZO-1.
Jiweiling-related preparations such as Jiweiling lyophilized powder can significantly improve blood-brain barrier scores, delay neuronal edema and reduce blood-brain barrier permeability in injured mice . By regulating the permeability of the blood-brain barrier, the active components of Chinese medicine can restore the integrity of this immune barrier in ALS and enhance its self-protection. 3.2.4.2 Regulates microglia Neuroinflammation is an important host defense mechanism that protects the brain from infection or injury and restores normal structure and function , but chronic inflammation can induce cytotoxicity and worsen the severity of neurodegenerative diseases such as Parkinson’s disease (PD) , multiple sclerosis (MS) , and ALS . Dysregulation of the inflammatory response, characterized by abnormal activation of microglia and an overabundance of pro-inflammatory cytokines, leads to the neurodegeneration observed in ALS . Chinese HM has a rich historical background, notable curative effects and minimal adverse reactions. It acts by regulating microglial activation and polarization and inhibiting inflammatory responses, mediated through microglia and related pathways such as the NF-κB, Toll-like receptor, Notch, AMPK and MAPK signaling pathways . Tripterygium wilfordii extract can regulate the phosphorylation of extracellular signal-regulated kinase 1/2 and nuclear factor-κB to reduce the production of pro-inflammatory factors and nitric oxide, thereby exerting an anti-inflammatory effect in autoimmunity . As resident macrophages of the central nervous system, microglia play a key role in maintaining brain homeostasis , but over-activated microglia release many pro-inflammatory factors and neurotoxic substances, aggravating the damage. 
Melittin, an active component extracted from bee venom, can directly reduce the activity of microglia or indirectly reduce the secretion of inflammatory factors and the phosphorylation of p38 mitogen-activated protein kinase in the brainstem and spinal cord, significantly regulating inflammation in ALS mice and delaying the development of the disease . The extract can also inhibit the c-Jun N-terminal kinase signaling pathway and reduce microglial expression of inducible nitric oxide synthase and cyclooxygenase-2, playing an anti-inflammatory role . Up-regulated TLR4 is a key receptor involved in the activation and function of microglia. Geniposide (GEN), an active ingredient of Gardenia, can effectively reduce the expression of TLR4, MyD88, p-IκB, NF-κB, p-ERK1/2 and p38 proteins, thereby exerting an anti-inflammatory effect and inhibiting microglial activation by down-regulating the TLR4/MyD88-dependent pathway . Polygalasaponin F (PGSF), a triterpenoid saponin extracted from Polygala japonica (Guazijin), can effectively counteract the up-regulation of Toll-like receptor 4 (TLR4) in microglia and down-regulate the inflammation-induced expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2), thereby improving microglial overactivation, reducing the production of neurotoxic factors and lessening damage to nerve cells . KCHO-1, a natural ethanol extract obtained from turmeric, Salvia miltiorrhiza, Gastrodia elata (Tianma), papaya and other herbs, can reduce oxidative stress by decreasing expression of the gp91phox subunit of NADPH oxidase, down-regulating inducible nitric oxide synthase, and attenuating the phosphorylation of p38 mitogen-activated protein kinase and the activation of extracellular signal-regulated kinase 1/2, thereby inhibiting microglial proliferation and activation . 
Astragaloside IV, total saponins and baicalin can regulate microglial polarization and reduce brain tissue inflammation by mediating the MAPK signaling pathway. Calycosin can reduce TNF-α-containing microglial populations by activating the BDNF/TrkB signaling pathway, thereby reducing inflammation and neuronal damage . 3.2.4.3 Inhibition of complement system activation The complement system is a group of enzymatically active proteins in human serum and interstitial fluid, composed of more than 30 kinds of soluble and membrane-bound proteins synthesized by the liver. The complement cascade consists of nine components, named C1 through C9 . Normal activation of complement enhances immunity, but excessive activation can cause inflammation, tissue damage and various immune hemolysis reactions . Complement activation has long been implicated in the pathogenesis of ALS, and many clinical and animal studies have shown that complement factors, including C1q and C3, are strongly upregulated in regions of motor neuron death . HM has a complex mechanism of action and a wide range of effects. According to research, polysaccharides in natural HM are important components through which HM regulates complement activity . C1r, C1s, C3 and C4 were the main targets of the crude polysaccharide extract of S. mellowsis, which inhibited the activity of the complement system . The polysaccharide component PsEULl3 of Eucommia ulmoides has very high anti-complement activity against the classical pathway . The APS-2 polysaccharide isolated and purified from the plant has good anti-complement activity, and its targets are C1q, C2, C3, C5 and C9 . Lentinan can promote the cleavage of complement C3 into the anaphylatoxin C3a; the mechanism may be that recognition sites on complement proteins recognize the polysaccharide structure and activate the complement system. 
The main targets of quercetin 7,3’,4’-trimethyl ether from patchouli were C1q, C2, C5 and C9 . Chinese herbs are generally known to act on the classical pathway to inhibit the activity of the complement system, but their active ingredients can also act on the alternative pathway. Among them, C1q, C2, C4 and C9 are the main targets of the components extracted from knotweed , and these extracts can act on different complement targets, or on multiple targets together . The catechin-3-O-β-D-(2-cinnamoyl)-glucoside isolated and identified from the Chinese herb Anagardia showed different degrees of inhibition of the classical and alternative pathways of the complement system . 3.2.4.4 Regulates T lymphocytes In the central nervous system, CD4+ T cells are believed to have neuroprotective effects; they can promote the neuroprotective functions of glial cells and delay disease progression by changing glial cell morphology . CD4+ T cells can differentiate into regulatory and effector T cells, with the former regulating the proliferation of the latter. In the T-cell-mediated immune response, an imbalance of effector and regulatory T cells triggers neuroinflammation and eventually leads to neuronal degeneration and necrosis . Regulatory T cells regulate the immune response of the body and usually play an important role in maintaining self-tolerance and preventing excessive immune responses from damaging the body. In the early stage of ALS, regulatory T cell levels increase and exert an anti-inflammatory effect in the central nervous system, while in the late, rapidly progressive stage, regulatory T cell levels decrease and neuroinflammation worsens. Effector T cells are cells that proliferate and differentiate after T cells receive antigen stimulation; they release lymphokines and respond actively to stimulation. 
Observation of patients with ALS has also shown that increased effector T cells in blood and cerebrospinal fluid are associated with decreased survival, while increased regulatory T cells in blood are associated with improved survival . Trichosanthin, extracted from Trichosanthes, can directly increase the number of regulatory T cells, raise the level of the immune index interleukin-10 and induce higher expression of forkhead box protein 3 (Foxp3), thereby enhancing the immunoregulatory capacity of regulatory T cells . Dichloromethylamine, a soluble component of wheat zhai, can alter AKT phosphorylation signaling, reduce the differentiation of T helper 17 cells, and induce the proliferation of regulatory T cells to maintain balance . Zuogui pill can also up-regulate interleukin-10, thereby improving the immune response function of regulatory T cells . 3.2.4.5 Regulates natural killer cells Natural killer (NK) cells are key components of innate immunity and highly cytotoxic effector cells. One study showed an increase in NK cells in the blood of ALS patients compared to controls . At present, there are few studies on the relationship between NK cells and the occurrence and development of ALS, and the relationship between the two is unclear. However, infiltration of NK cells and increased expression of NKG2D ligands on motor neurons (MNs) have been found in the motor cortex and spinal cord of deceased ALS patients, and NK cells were toxic to MNs expressing NKG2D ligands. NK cells also secreted IFN-γ to activate microglia and damage regulatory T cells in the spinal cord of mSOD1 mice . These results suggest that NK cells may affect the occurrence and development of ALS through multiple immune mechanisms. Futenge mixture, which is composed of Futenge, Astragalus and red ginseng, can up-regulate CD2, CD95 and PD-1 receptors and activation molecules on the surface of natural killer cells, playing an excellent regulatory role on natural killer cells . 
All the above studies show that, by influencing tight junction proteins such as ZO-1 and claudin-5, reducing the production of pro-inflammatory factors and nitric oxide, acting on complement targets, and up-regulating cell surface receptor molecules, HM can stabilize the blood-brain barrier, inhibit the activation of microglia and the complement system, weaken the toxicity of natural killer cells, and regulate T cell function, thereby alleviating and treating ALS at the level of immune function . 3.1 Systematic analysis 3.1.1 Study inclusion For the preclinical evidence research section, we initially identified 2026 papers through systematic searches across six databases. After removing duplicates, 1718 records remained. Upon careful examination of titles and abstracts, 1679 articles were excluded for one or more of the following reasons: (1) the article constituted a review, case report, comment, abstract-only publication, or editorial; (2) the article was not related to animal studies; (3) the article did not focus on research related to ALS; (4) the article did not focus on the therapeutic effects of HM. After thorough examination of the full text of the remaining 39 articles, 21 articles were excluded for one or more of the following reasons: (1) the primary outcome measures were not survival time and onset time, (2) incomplete outcome measure data, (3) primarily cell-based studies, and (4) intervention involving acupuncture at the Zusanli (ST36) acupoint . 3.1.2 Characteristics of included studies We ultimately selected 18 studies involving 19 comparisons, comprising 8 Chinese and 10 English publications. One study contributed two comparisons because of the treatment regimens used . All 19 comparisons used SOD1 G93A mice as the experimental group, with each group exclusively receiving HM treatment. Fourteen comparisons included a blank control group, which was regularly administered saline or distilled water. 
Meanwhile, six comparisons established a positive control group; five of these used riluzole and one used edaravone as the positive control. Among these comparisons, seven used male animals exclusively , one used female animals exclusively , three used an equal mix of males and females , and eight did not specify the sex of the animals. Among the included comparisons, six employed a concentration gradient of HMs . Nine comparisons administered treatments via oral gavage , six via oral administration , two via intraperitoneal injection , one via bilateral subcutaneous injection , and one did not specify the method of administration . Ten comparisons recorded both onset time and survival time as outcome measures . For outcome measures, when a concentration gradient was employed, the highest-concentration group was recorded . 3.1.3 Study quality and risk of bias 3.1.3.1 CAMARADES The quality scores of the studies ranged from 4 to 8 out of a total score of 10. One study received a score of 4 ; two studies received a score of 5 ; four studies received a score of 6 ; seven studies received a score of 7 ; and four studies received a score of 8 . All included records were peer-reviewed publications, and all studies utilized appropriate animal models without the use of anesthetics with marked intrinsic properties. Twelve studies mentioned random allocation of animals into treatment and control groups ; the randomization methods specified in two studies were the random number table and sequential numbering . Three studies reported blinded outcome assessment ; however, no studies reported sample size calculations. Twelve studies described temperature control , twelve studies reported compliance with animal welfare regulations , and seven studies declared no potential conflicts of interest . 3.1.3.2 SYRCLE The SYRCLE ROB is currently the only tool specifically designed for evaluating the internal validity of animal experiments. 
The risk-of-bias scores of the studies ranged from 3 to 7 out of a total of 10. Two studies received a score of 3 ; five studies received a score of 4 ; seven studies received a score of 5 ; three studies received a score of 6 ; and one study received a score of 7 . The tool was developed on the basis of the Cochrane risk-of-bias tool, with additional items specific to animal studies. As shown in , within the 10 items: A. Sequence generation: two studies used the random number table method for grouping, rated as “low risk”; ten studies only mentioned “random” without detailed explanation, and six studies did not mention the grouping method, rated as “uncertain risk.” B. Baseline characteristics: three studies mentioned baseline comparison of mice, rated as “low risk.” The remaining fifteen studies mentioned only one or more of the age, sex, weight, or strain of the animals and did not provide baseline values of the relevant outcome indicators, hence rated as “uncertain risk.” C. Allocation concealment: two studies mentioned “random” or “random number table,” rated as “low risk”; the remaining sixteen studies did not mention concealment of allocation, or the information provided was insufficient to ensure the unpredictability of the random sequence, hence rated as “uncertain risk.” D. Random housing: fifteen studies indicated housing mice in comparable environments with free access to water and similar temperature and humidity, rated as “low risk”; three studies did not mention housing conditions, rated as “uncertain risk.” E. Performance bias (blinding): no study described blinding of animal caregivers, researchers, and outcome assessors, hence all were rated as “uncertain risk.” F. 
Outcome assessment: three studies mentioned “random” selection of mice for outcome assessment, rated as “low risk”; fifteen studies did not mention it, hence rated as “uncertain risk.” G. Detection bias (blinding): one study mentioned blinding in evaluating experimental results, rated as “low risk”; seventeen studies did not mention it, hence all were rated as “uncertain risk.” H. Incomplete outcome data: one study had missing data during the experiment but did not explain whether the missing data affected the authenticity of the final result, hence rated as “high risk”; four studies only reported the data range, making it impossible to determine whether data were missing, hence rated as “uncertain risk.” I. Selective outcome reporting: no study showed incomplete data reporting, rated as “low risk.” J. Other sources of bias: no other sources of bias were found in any study, hence rated as “low risk.” . 3.1.4 Effectiveness 3.1.4.1 Onset time 3.1.4.1.1 Meta-analysis Twelve studies were included , with a total sample size of 267 animals, including 134 animals treated with HM and 133 animals in the control group, with individual study sample sizes ranging from 5 to 25. The heterogeneity test showed I² = 75%, P < 0.01, so a random-effects model was used. The overall standardized mean difference (SMD) was 1.75, 95% confidence interval (CI) 1.14 to 2.36, Z = 5.60, P < 0.01, indicating that HM treatment was effective and superior to the control group (P < 0.01) . 3.1.4.1.2 Sensitivity analysis Further sensitivity analyses comparing HM with conventional feeding regimens (12 trials with 267 animals) showed that Chinese HM was more beneficial in terms of the overall mean reduction in onset time, with no significant heterogeneity between studies . 3.1.4.2 Survival time 3.1.4.2.1 Meta-analysis Seventeen studies were included , with a total sample size of 385 animals, including 192 animals treated with HM and 193 animals in the control group, with individual study sample sizes ranging from 5 to 25. The heterogeneity test showed I² = 85%, P < 0.01, so a random-effects model was used. 
The overall SMD was 1.42, 95% CI 0.79 to 2.04, Z = 4.44, P < 0.01, indicating that HM treatment was effective and superior to the control group ( P < 0.01) . 3.1.4.2.2 Sensitivity analysis Further sensitivity analyses comparing HM with conventional feeding regimens (17 trials with 385 animals) showed that HM was more beneficial in terms of the overall mean reduction in survival time, with no significant heterogeneity between studies . 3.1.4.3 Stride length Four studies were included , with a total sample size of 89 animals, including 44 animals treated with HM and 45 animals in the control group, with individual study sample sizes ranging from 5 to 25. The heterogeneity test showed I² = 62%, P = 0.05. A random-effects model was used, and a further subgroup analysis was conducted, dividing the samples into two groups based on sample size: n > 10 and n < 10. In the n < 10 group, three studies were included, with a total sample size of 41 animals, including 20 animals treated with HM and 21 animals in the control group. The overall SMD was 2.80, 95% CI 0.90 to 4.70, Z = 2.89, P = 0.004, indicating that HM was effective in treating ALS in the n < 10 group and its efficacy was superior to the control group ( P < 0.05). In the n > 10 group, one study was included , with a total sample size of 48 animals, including 24 animals treated with HM and 24 animals in the control group. The overall SMD was 1.90, 95% CI 1.21 to 2.59, Z = 5.39, P < 0.01, indicating that HM was effective in treating motor neuron disease compared to the control group in the n > 10 group ( P < 0.01) . 3.1.4.4 Duration time Of the 18 included studies, 3 assessed the therapeutic effect by calculating the duration of disease in ALS mice, recording disease duration in both the treatment and control groups. A meta-analysis of the durations of the two groups was conducted to evaluate the efficacy of HM in treating ALS animal models. 
In the 3 studies, both the treatment and control groups included 25 animals. The heterogeneity test showed P = 0.004, I² = 82%, indicating statistically significant heterogeneity between groups, so a random-effects model was used to combine the effect sizes of disease duration. The results showed that disease duration in the treatment group was shorter than in the control group (MD = 6.79, 95% CI [-0.28, 13.87]), but the difference between the two groups was not statistically significant ( P = 0.06). For the treatment of ALS mice, efficacy evaluation based on recorded disease duration therefore suggests that HM has no effect in delaying the progression of ALS compared to the control group . 
Fourteen comparisons included a blank control group, which was regularly administered saline or distilled water. Meanwhile, Six comparisons established a positive control group, five of these used riluzole and one used edaravone as the positive controls. Among these comparisons, seven utilized male animals exclusively , one utilized female animals exclusively , and three used an equal mix of males and females , eight comparisons did not specify the gender of the animals. Among the included comparisons, 6 employed a concentration gradient of HMs . Nine comparisons administered treatments via oral gavage , six via oral administration , two via intraperitoneal injection one via bilateral subcutaneous injection , and one did not specify the method of administration . Ten comparisons simultaneously recorded onset time and survival time as outcome measures For outcome measures, when a concentration gradient was employed, the highest concentration group was recorded . Study quality and risk of bias 3.1.3.1 CAMARADES The quality scores of the studies ranged from 4 to 8, with a total score of 10. One study received a score of 4 ; two studies received a score of 5 ; four studies received a score of 6 ; seven studies received a score of 7 , and four studies received a score of 8 . All included records were peer-reviewed publications, and all studies utilized appropriate animal models without the use of anesthetics with marked intrinsic properties. Twelve studies mentioned random allocation of animals into treatment and control groups , the methods mentioned in the two studies for specific randomization included random number table and sequential numbering . three studies reported blinded outcome assessment , however, no studies reported sample size calculations. Twelve studies described temperature control , twelve studies reported compliance with animal welfare regulations , and seven studies declared no potential conflicts of interest . 
3.1.3.2 SYRCLE The SYRCLE’s ROB is currently the only tool specifically designed for evaluating the internal validity of animal experiments. The risk of bias scores of the studies ranged from 3 to 7, with a total score of 10. Two studies received a score of 3 ; Five studies received a score of 4 ; Seven studies received a score of 5 ; Three studies received a score of 6 , One studies received a score of 7 . It is developed based on the Cochrane Bias Risk Assessment Tool and are additional items. As shown in , within the 10 items: A. Sequence generation: two studies used the “random number table method” for grouping, rated as “low risk”; Ten studies only mentioned “random” without detailed explanation, and six studies did not mention the grouping method, rated as “uncertain risk” (the quality assessment table still needs modification). B. Baseline characteristics: three studies mentioned baseline comparison of mice, rated as “low risk.” The remaining fifteen studies mentioned only one or more of the age, gender, weight, or species of rats, and did not provide baseline values of relevant outcome indicators in the experiment, hence rated as “uncertain risk.” C. Allocation concealment: two studies mentioned “random” or “random number table,” rated as “low risk”; the remaining sixteen studies did not mention concealment of allocation or the provided information was insufficient to achieve the unpredictability of the random sequence, hence rated as “uncertain risk.” D. Random housing: fifteen studies indicated placing mice in individually housed environments with free access to water, similar temperature, humidity, etc., rated as “low risk.” 3 studies did not mention housing conditions, rated as “uncertain risk.” E. Performance bias (Blinding): All studies did not describe blinding of animal caregivers, researchers, and outcome assessors, hence rated as “uncertain risk.” F. 
Outcome assessment: three studies mentioned “random” selection of mice for outcome assessment, rated as “low risk”; fifteen studies did not mention it, hence rated as “uncertain risk.” G. Detection bias (Blinding):one study mentioned blinding in evaluating experimental results, rated as “low risk”; seventeen studies did not mention it, hence all rated as “uncertain risk.” H. Incomplete outcome data: one study had missing data during the experiment, but did not provide any explanation on whether the missing data affected the final result’s authenticity, hence rated as “high risk”; four studies only reported the data range, making it impossible to determine if there was data missing, hence rated as “uncertain risk.” I. Selective outcome reporting: All studies did not find incomplete data reporting, rated as “low risk.” J. Other sources of bias: All studies did not find other sources of bias, hence rated as “low risk.” . CAMARADES The quality scores of the studies ranged from 4 to 8, with a total score of 10. One study received a score of 4 ; two studies received a score of 5 ; four studies received a score of 6 ; seven studies received a score of 7 , and four studies received a score of 8 . All included records were peer-reviewed publications, and all studies utilized appropriate animal models without the use of anesthetics with marked intrinsic properties. Twelve studies mentioned random allocation of animals into treatment and control groups , the methods mentioned in the two studies for specific randomization included random number table and sequential numbering . three studies reported blinded outcome assessment , however, no studies reported sample size calculations. Twelve studies described temperature control , twelve studies reported compliance with animal welfare regulations , and seven studies declared no potential conflicts of interest . 
SYRCLE The SYRCLE’s ROB is currently the only tool specifically designed for evaluating the internal validity of animal experiments. The risk of bias scores of the studies ranged from 3 to 7, with a total score of 10. Two studies received a score of 3 ; Five studies received a score of 4 ; Seven studies received a score of 5 ; Three studies received a score of 6 , One studies received a score of 7 . It is developed based on the Cochrane Bias Risk Assessment Tool and are additional items. As shown in , within the 10 items: A. Sequence generation: two studies used the “random number table method” for grouping, rated as “low risk”; Ten studies only mentioned “random” without detailed explanation, and six studies did not mention the grouping method, rated as “uncertain risk” (the quality assessment table still needs modification). B. Baseline characteristics: three studies mentioned baseline comparison of mice, rated as “low risk.” The remaining fifteen studies mentioned only one or more of the age, gender, weight, or species of rats, and did not provide baseline values of relevant outcome indicators in the experiment, hence rated as “uncertain risk.” C. Allocation concealment: two studies mentioned “random” or “random number table,” rated as “low risk”; the remaining sixteen studies did not mention concealment of allocation or the provided information was insufficient to achieve the unpredictability of the random sequence, hence rated as “uncertain risk.” D. Random housing: fifteen studies indicated placing mice in individually housed environments with free access to water, similar temperature, humidity, etc., rated as “low risk.” 3 studies did not mention housing conditions, rated as “uncertain risk.” E. Performance bias (Blinding): All studies did not describe blinding of animal caregivers, researchers, and outcome assessors, hence rated as “uncertain risk.” F. 
Outcome assessment: three studies mentioned “random” selection of mice for outcome assessment, rated as “low risk”; fifteen studies did not mention it, hence rated as “uncertain risk.” G. Detection bias (Blinding):one study mentioned blinding in evaluating experimental results, rated as “low risk”; seventeen studies did not mention it, hence all rated as “uncertain risk.” H. Incomplete outcome data: one study had missing data during the experiment, but did not provide any explanation on whether the missing data affected the final result’s authenticity, hence rated as “high risk”; four studies only reported the data range, making it impossible to determine if there was data missing, hence rated as “uncertain risk.” I. Selective outcome reporting: All studies did not find incomplete data reporting, rated as “low risk.” J. Other sources of bias: All studies did not find other sources of bias, hence rated as “low risk.” . Effectiveness 3.1.4.1 Onset time 3.1.4.1.1 Meta analysis Twelve studies were included , with a total sample size of 267animals, including 134 animals treated with HM and 133 animals in the control group, with individual study sample sizes ranging from 5 to 25. Heterogeneity test results showed I² = 75%, P < 0. 01. A random-effects model was used. The overall effective rate Standardized Mean Difference(SMD) was 1.75, 95% Confidence Interval(CI) (1.14 ~ 2.36), Z = 5.60, P < 0.01, indicating that HM treatment was effective and superior to the control group (P < 0.01) . 3.1.4.1.2 Sensitivity analysis Further sensitivity analyses comparing HM with conventional feeding regimens (12 trials with 267 participants) showed that Chinese HM was more beneficial in terms of overall mean reduction in onset time, with no significant heterogeneity between studies . 
3.1.4.2 Survival time 3.1.4.2.1 Meta analysis Seventeen studies were included , with a total sample size of 385 animals, including 192 animals treated with HM and 193 animals in the control group, with individual study sample sizes ranging from 5 to 25. Heterogeneity test results showed I² = 85%, P < 0.01. A random-effects model was used. The overall effective rate SMD was 1.42, 95% CI (0.79 ~ 2.04), Z = 4.44, P < 0.01, indicating that HM treatment was effective and superior to the control group ( P < 0.01) . 3.1.4.2.2 Sensitivity analysis Further sensitivity analyses comparing HM with conventional feeding regimens (17 trials with 385animals) showed that HM was more beneficial in terms of overall mean reduction in survival time, with no significant heterogeneity between studies . 3.1.4.3 Stride length Four studies were included , with a total sample size of 89 animals, including 44 animals treated with HM and 45 animals in the control group, with individual study sample sizes ranging from 5 to 25. Heterogeneity test results showed I ²= 62%, P = 0.05. A random-effects model was used, and further subgroup analysis was conducted, dividing the samples into two groups based on sample size: n > 10 and n < 10. In the n < 10 group, three studies were included, with a total sample size of 41 animals, including 20 animals treated with HM and 21 animals in the control group. The overall effective rate SMD was 2.80, 95% CI (0.90 to 4.70), Z = 2.89, P = 0.004, indicating that HM was effective in treating ALS in the n < 10 group and its efficacy was superior to the control group ( P < 0.05). In the n > 10 group, one study was included , with a total sample size of 48 animals, including 24 animals treated with HM and 24 animals in the control group. The overall effective rate SMD was 1.90, 95% CI (1.21 to 2.59), Z = 5.39, P < 0.01, indicating that HM was effective in treating motor neuron diseases compared to the control group in the n > 10 group ( P < 0.01) . 
3.1.4.4 Duration time
Of the 18 included studies, 3 assessed the therapeutic effect by recording disease duration in ALS mice for both the treatment and control groups, and a meta-analysis of these durations was conducted to evaluate the efficacy of HM in ALS animal models. Across the 3 studies, the treatment and control groups each included 25 animals. The heterogeneity test showed P = 0.004, I² = 82%, indicating statistically significant heterogeneity between groups, so a random-effects model was used to combine the effect sizes for disease duration. The pooled difference in disease duration between the two groups (MD = 6.79, 95% CI [-0.28, 13.87]) was not statistically significant (P = 0.06). Judged by recorded disease duration, therefore, HM showed no significant effect in delaying the progression of ALS compared with the control group.
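As a back-of-envelope check (not the authors' computation), the non-significance of the duration result can be recovered from the reported interval alone, assuming the 95% CI was built as MD ± 1.96·SE on a normal scale:

```python
import math

def p_from_ci(estimate, lo, hi):
    """Recover a two-sided P value from a point estimate and its 95% CI,
    assuming the interval is estimate +/- 1.96*SE on a normal scale."""
    se = (hi - lo) / (2 * 1.96)
    z = abs(estimate) / se
    # two-sided tail probability from the standard normal CDF (via erf)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Duration-time figures reported above: MD = 6.79, 95% CI [-0.28, 13.87]
p = p_from_ci(6.79, -0.28, 13.87)   # ~0.06, consistent with the reported P = 0.06
```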
3.2 Scoping review

3.2.1 Study inclusion
For the scoping review on the immune mechanisms of HM in the treatment of ALS, a total of 3702 articles were retrieved in the initial search of two databases. After removal of duplicates, 3694 articles were screened by title and abstract. Seventy-five articles underwent full-text analysis, of which 35 were included in this scoping review.

3.2.2 Characteristics of included studies
This scoping review encompasses 35 studies: 25 from China, 5 from Korea, 1 from Japan, and 4 from other Western countries, published between 2005 and 2023. Thirteen are animal experiments, 10 are cell experiments, 3 are clinical trials, and 9 are pharmacological experiments. Five immune modulation pathways are covered, along with three other mechanisms; the literature on blood-brain barrier protection is the most abundant, with 8 studies. In total, 26 different single drugs and compound formulations are involved.

3.2.3 Mechanisms of HM in the treatment of ALS
ALS is an incurable neurodegenerative disease that affects the upper and lower motor neurons of the cerebral cortex and spinal cord.
The etiology and pathogenesis of ALS remain unknown at present. Some studies indicate that HMs show promising potential in combating oxidative stress, excitatory amino acid toxicity, neuroinflammation, and calcium cytotoxicity, offering hope in the treatment of ALS.

3.2.3.1 Excitatory amino acids toxicity
In 1957, Lucas and Newhouse’s pioneering research demonstrated the lethal effects of glutamate on neurons in the central nervous system (CNS). Since then, the molecular mechanisms underlying neuronal injury from excessive glutamate receptor stimulation have begun to be unraveled, indicating that glutamate may exert toxicity on neurons through multiple pathways. The excitotoxicity resulting from abnormal elevation of the extracellular excitatory neurotransmitter glutamate, including the generation of free radicals and lipid peroxides, can induce spontaneous dissolution and degeneration of neurons, contributing to the development of ALS. Numerous experimental findings demonstrate the ability of cryptotanshinone to counter glutamate-induced cytotoxicity and safeguard neurons, indicating its potential utility in mitigating the onset of ALS; the pivotal involvement of the PI3K/Akt signaling pathway in cell survival against glutamate-induced toxicity has been underscored. Several Chinese herbs can inhibit amino acid toxicity and thereby shield neurons. For instance, Acanthopanax extract elevates heme oxygenase (HO)-1 expression, curbing LPS-induced NO/ROS generation; notably, HO-1 expression safeguards cells against glutamate-induced neuronal death. Additionally, cryptotanshinone-mediated neuroprotection counters glutamate-induced toxicity by activating the PI3K/Akt pathway and averting downregulation of the anti-apoptotic protein Bcl-2.
Furthermore, Mahesh’s research revealed cryptotanshinone’s capacity to hinder nerve cell apoptosis induced by sodium nitroprusside (SNP), further evidence of its neuroprotective properties.

3.2.3.2 Oxidative stress
Oxidative stress is caused by an imbalance between the production and removal of reactive oxygen species (ROS) and reactive nitrogen species (RNS). It can oxidatively modify bioactive molecules such as proteins, lipids, sugars, and nucleic acids, so that they lose their original structure and function, impairing normal cellular physiology and ultimately leading to cell degeneration and necrosis. Under normal conditions, free radicals do not cause pathological changes, because the body has enzymes that counter free-radical damage, such as SOD1, glutathione peroxidase (GSH-Px), and catalase (CAT), as well as a non-enzymatic defense system, including the antioxidants carotenoids, tocopherols, and vitamin C, together with free-metal- and heme-binding proteins. These can interrupt the free-radical chain reaction or convert free radicals into less active substances, keeping the production and removal of free radicals in balance. If free-radical production exceeds the body’s capacity to remove them, oxidative stress ensues. Allicin, the primary compound in garlic oil, can induce phase II enzymes, enhancing antioxidant activity and protecting ALS neurons from oxidative stress. Oral administration of the related garlic compound DATS to SOD1-G93A mice at clinical onset induced HO-1 expression in the lumbar spinal cord, directly influencing oxidative damage; these findings suggest that oral administration of DATS significantly extends the lifespan of the mice.

3.2.3.3 Cytotoxicity of calcium
In classical acute excitotoxicity, the influx of Na+ and Cl- disrupts intracellular Ca2+ homeostasis, triggering a cascade of detrimental biochemical processes.
Opening of voltage-gated calcium channels leads to a surge in calcium ions, resulting in excessive release of the excitatory amino acid glutamate. Calcium influx through NMDA/AMPA receptors, metabotropic glutamate receptors, and voltage-dependent calcium channels activates enzymes such as proteases, lipases, kinases, nucleases, and NOS. The resulting generation of free radicals and synthesis of NO further exacerbate neuronal damage, ultimately leading to programmed cell death via activation of apoptosis genes. Neuroprotective drugs are thought to act primarily by preventing calcium influx, regulating excitatory amino acid toxicity, and modulating microvascular inflammatory responses. Studies have shown that ligustrazine, an extract of the Chinese herb Chuanxiong, can protect nerve cells by lowering intracellular calcium levels and inhibiting glutamate release. Callewaere et al. investigated its protective effect by stimulating nerve cells with stromal cell-derived factor 1 (SDF-1), which elevates intracellular calcium, and then treating them with ligustrazine; they observed a significant decrease in intracellular calcium in the ligustrazine-treated group exposed to SDF-1, indicating ligustrazine’s ability to mitigate calcium cytotoxicity and serve as a neuroprotective agent.

3.2.3.4 Other relevant mechanisms
Some studies have also found that the incidence of ALS is related to neurotrophic factor deficiency, imbalance of metals and trace elements, cell apoptosis, viral infection, and abnormal neurofilament aggregation.

3.2.4 The possible immunological mechanisms of HM in treating ALS
HM has good anti-inflammatory properties and extensive immunomodulatory effects, and can achieve good curative effects in disorders of the immune system.
ALS can be alleviated and treated by protecting the blood-brain barrier, countering neuroinflammation, inhibiting activation of the complement system, inhibiting natural killer cell cytotoxicity, and regulating T cells.

3.2.4.1 Blood-brain barrier protection
The blood-CNS barrier comprises the blood-brain barrier and the blood-spinal cord barrier, which control the transport of substances across the barrier and thereby maintain a relatively stable environment within nervous tissue. It is an important immune barrier in ALS, effectively preventing toxic substances from infiltrating the central nervous system from the blood and pumping toxins outward. The targets through which HM protects the blood-brain barrier are closely related to the tight junction proteins, which comprise three transmembrane components (occludin, the claudins, and junctional adhesion molecules) and the cytoplasmic zonula occludens proteins (ZO-1, ZO-2, and ZO-3). Both borneol and astragaloside can increase the expression of ZO-1 and claudin-5, and their combination with total Panax notoginseng saponins can also inhibit the downregulation of ZO-1 and occludin, significantly improving blood-brain barrier permeability. When borneol was administered with safflower, the expression of MMP-2 and MMP-9 decreased and that of ZO-1 and claudin-5 increased. Sijunzi Decoction can increase the expression of occludin, ZO-1, claudin-1, and their mRNAs. After the use of Buyang Huanwu Decoction, serum von Willebrand factor and the expression of vascular endothelial growth factor, MMP-9, and MMP-2 in brain tissue decreased, indicating a protective effect on the blood-brain barrier.
Ginsenoside Rg1 can also up-regulate ZO-1 and occludin and down-regulate matrix metalloproteinase-2 and matrix metalloproteinase-9, restoring the integrity of the blood-brain barrier. Breviscapine acts mainly by up-regulating the expression of CD63 and the blood-brain barrier tight junction proteins claudin-5, occludin, and ZO-1. Jiweiling preparations such as Jiweiling lyophilized powder can significantly improve the blood-brain barrier score, delay neuronal edema, and reduce blood-brain barrier permeability in injured mice. By regulating the permeability of the blood-brain barrier, the active components of Chinese medicine can restore the integrity of this immune barrier in ALS and enhance its self-protection.

3.2.4.2 Regulates microglia
Neuroinflammation is an important host defense mechanism that protects the brain from infection or injury and restores normal structure and function, but chronic inflammation can induce cytotoxicity and worsen the severity of neurodegenerative diseases such as Parkinson’s disease (PD), multiple sclerosis (MS), and ALS. Dysregulation of the inflammatory response, characterized by abnormal activation of microglia and an overabundance of pro-inflammatory cytokines, leads to the neurodegeneration observed in ALS. Chinese HM has a rich historical background, remarkable curative effects, and minimal adverse reactions.
It acts by regulating microglial activation and polarization and inhibiting inflammatory responses, mediated through microglia and related pathways such as the NF-κB, Toll-like receptor, Notch, AMPK, and MAPK signaling pathways. Tripterygium wilfordii extract can regulate the phosphorylation of extracellular signal-regulated kinase 1/2 and nuclear factor-κB, reducing the production of pro-inflammatory factors and nitric oxide and thereby exerting an anti-inflammatory, immunosuppressive effect. As the resident macrophages of the central nervous system, microglia play a key role in maintaining brain homeostasis, but over-activated microglia release many pro-inflammatory factors and neurotoxic substances, aggravating damage. Melittin, an active component extracted from bee venom, can directly reduce microglial activity or indirectly reduce the secretion of inflammatory factors and the phosphorylation of p38 mitogen-activated protein kinase in the brainstem and spinal cord, significantly attenuating inflammation in ALS mice and delaying disease progression. The extract can also inhibit the c-Jun N-terminal kinase signaling pathway, reducing microglial inducible nitric oxide synthase and cyclooxygenase-2 expression and playing an anti-inflammatory role. Up-regulated TLR4 is a key receptor involved in microglial activation and function. Geniposide (GEN), an active ingredient of Gardenia, can effectively reduce the expression of TLR4, MyD88, p-IκB, NF-κB, p-ERK1/2, and p38 proteins, exerting an anti-inflammatory effect and inhibiting microglial activation by down-regulating the MyD88-dependent TLR4 pathway.
Polyhemigosaponin F (PGSF), a triterpenoid saponin extracted from Polygala, can effectively counteract the up-regulation of toll-like receptor 4 (TLR4) in microglia and down-regulate inflammation-induced expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2), ameliorating microglial over-activation and the production of neurotoxic factors and reducing damage to nerve cells. KCHO-1, a natural ethanol extract obtained from turmeric, Salvia miltiorrhiza, Gastrodia elata (Tianma), papaya, and other herbs, can reduce oxidative stress by decreasing the gp91phox subtype of NADPH oxidase, down-regulating inducible nitric oxide synthase, and alleviating p38 mitogen-activated protein kinase phosphorylation and extracellular signal-regulated kinase 1/2 activation, thereby inhibiting microglial proliferation and activation. Astragaloside IV, total saponins, and baicalin can regulate microglial polarization and improve brain tissue inflammation by mediating the MAPK signaling pathway. Calycosin can reduce the population of TNF-α-containing microglia by activating the BDNF/TrkB signaling pathway, thereby reducing inflammation and neuronal damage.

3.2.4.3 Inhibition of complement system activation
The complement system is a group of enzymatically active proteins in human serum and interstitial fluid, composed of more than 30 soluble and membrane-bound proteins synthesized by the liver. The cascade proper consists of nine components, named C1 through C9. Normal complement activation enhances immunity, but excessive activation can cause inflammation, tissue damage, and various immune hemolytic reactions. Complement activation has long been implicated in the pathogenesis of ALS, and many clinical and animal studies have shown strong upregulation of complement factors, including C1q and C3, in regions of motor neuron death.
HM has complex mechanisms of action and a wide range of effects. According to research, polysaccharides in natural HM are important components regulating complement activity. C1r, C1s, C3, and C4 are the main targets of the crude polysaccharide extract of S. mellowsis, which inhibits the activity of the complement system. The polysaccharide component PsEULl3 of Eucommia ulmoides has very high anti-complement activity against the classical pathway. The APS-2 glucoglycan isolated and purified from the plant has good anti-complement activity, with targets C1q, C2, C3, C5, and C9. Lentinan can decompose complement C3 into the anaphylatoxin C3a; the mechanism may be that recognition sites on complement proteins recognize the polysaccharide structure and activate the complement system. The main targets of quercetin 7,3′,4′-trimethyl ether from patchouli are C1q, C2, C5, and C9. Chinese herbs are generally known to inhibit complement activity by acting on the classical pathway, but their active ingredients can also act on the alternative pathway. C1q, C2, C4, and C9 are the main targets of the components extracted from knotweed, and extract components can act on different complement targets individually or on multiple targets together. The catechin-3-O-β-D-(2-cinnamyl)glucoside isolated and identified from the Chinese herb Anagardia showed varying degrees of inhibition of both the classical and alternative pathways of the complement system.

3.2.4.4 Regulates T lymphocytes
In the central nervous system, CD4+ T cells are believed to have neuroprotective effects; they can promote the neuroprotection of glial cells and delay disease progression by changing glial cell morphology. CD4+ T cells can differentiate into regulatory and effector T cells, with the former regulating the proliferation of the latter.
In T-cell-mediated immune responses, an imbalance of effector and regulatory T cells triggers neuroinflammation and eventually leads to neuronal degeneration and necrosis. Regulatory T cells regulate the body’s immune response and normally play an important role in maintaining self-tolerance and preventing excessive immune responses from damaging the body. In early ALS, regulatory T cell levels increase and exert an anti-inflammatory effect in the central nervous system, whereas in the rapidly progressing late stage their levels decrease, reflecting worsening neuroinflammation. Effector T cells are cells that proliferate and differentiate after T cells receive antigen stimulation; they release lymphokines and respond actively to stimulation. Observations in ALS patients likewise show that increased effector T cells in blood and cerebrospinal fluid are associated with decreased survival, whereas increased regulatory T cells in blood are associated with improved survival. Trichosanthin, extracted from Trichosanthes, can directly increase the number of regulatory T cells and the level of the immune marker interleukin-10 and induce higher expression of the forkhead transcription factor FoxP3, thereby enhancing the immunoregulatory capacity of regulatory T cells. Dichloromethylamine, a soluble component of wheat zhai, can alter AKT phosphorylation signaling, reduce the differentiation of T helper 17 cells, and induce the proliferation of regulatory T cells to maintain balance. Zuogui Pill can also up-regulate interleukin-10, improving the immune response function of regulatory T cells.

3.2.4.5 Regulates natural killer cells
Natural killer (NK) cells are key components of innate immunity and highly cytotoxic active cells. One study showed an increase in NK cells in the blood of ALS patients compared with controls.
At present, there are few studies on the relationship between NK cells and the occurrence and development of ALS, and the relationship between the two remains unclear. However, NK cell infiltration and increased expression of NKG2D ligands on motor neurons (MNs) have been found in the motor cortex and spinal cord of deceased ALS patients, and NK cells are toxic to MNs expressing NKG2D ligands. NK cells also secrete IFN-γ, activating microglia and damaging regulatory T cells in the spinal cord of mSOD1 mice. These results suggest that NK cells may affect the occurrence and development of ALS through multiple immune mechanisms. Futenge mixture, composed of Futenge, Astragalus, and red ginseng, can up-regulate CD2, CD95, and PD-1 receptors and activation molecules on the surface of natural killer cells, exerting an excellent regulatory effect on them. Taken together, these studies show that by influencing tight junction proteins such as ZO-1 and claudin-5, reducing the production of pro-inflammatory factors and nitric oxide, acting on complement targets, and up-regulating cell-surface receptor molecules, HM can stabilize the blood-brain barrier, inhibit the activation of microglia and the complement system, weaken natural killer cell toxicity, and regulate T cell function, thereby alleviating and treating ALS at the level of immune function.
The included studies range from as early as 2005 to as recent as 2023. There are 13 studies on animal experiments, 10 on cell experiments, 3 on clinical trials, and 9 on pharmacological experiments. Five immune modulation pathways are covered, along with three other mechanisms. The literature on blood-brain barrier protection is the most abundant, with 8 studies. In total, 26 different single drugs and compound formulations are involved . Mechanisms of HM in the treatment of ALS ALS is an incurable neurodegenerative disease that affects the upper and lower motor neurons of the spinal cord, the cerebral cortex, and the spinal cord. The etiology and pathogenesis of ALS remain unknown at present . Here, some studies indicates that HMs show promising potential in combating oxidative stress, excitatory amino acid toxicity, nerve inflammation, and calcium cytotoxicity, offering hope in the treatment of ALS. 3.2.3.1 Excitatory amino acids toxicity In 1957, Lucas and Newhouse’s pioneering research demonstrated the lethal effects of glutamate on neurons in the central nervous system (CNS).Following this, The molecular mechanisms underlying neuronal injury due to excessive glutamate receptor stimulation are starting to be unraveled, indicating that glutamate may exert toxicity on neurons through multiple pathways . The excitotoxicity resulting from the abnormal elevation of extracellular excitatory neurotransmitter glutamate, including the generation of free radicals and lipid superoxide, can induce spontaneous dissolution and degeneration of neurons, contributing to the development of ALS. Numerous experimental findings demonstrate the ability of cryptotanshinone to counter glutamate-induced cytotoxicity and safeguard neurons, indicating its potential utility in mitigating the onset of ALS. The pivotal involvement of the PI3K/Akt signaling pathway in cell survival against glutamate-induced toxicity has been underscored . 
several Chinese herbs exhibit the capability to inhibit amino acid toxicity and consequently shield neurons. For instance, Acanthopanax extract elevates heme oxygenase (HO)-1 expression, thereby curbing the generation of NO/ROS induced by LPS. Notably, HO-1 expression serves to safeguard cells against glutamate-induced neuronal demise . Additionally, Cryptotanshinone-mediated neuroprotection combats glutamate-induced toxicity by activating the PI3K/Akt pathway and averting the downregulation of Bcl-2 within the anti-apoptotic protein family. Furthermore, Mahesh’s research revealed cryptotanshinone’s capacity to hinder nerve cell apoptosis induced by sodium nitroprusside (SNP), thus exhibiting neuroprotective properties . 3.2.3.2 Oxidative stress Oxidative stress is caused by an imbalance in the production and removal of Reactive oxygen species (ROS) and Reactive nitrogen species (RNS) . Oxidative stress can cause oxidative modification of bioactive molecules such as proteins, lipids, sugars, nucleic acids, etc., so that they lose their original structure and function, affect the normal physiological function of cells, and finally lead to cell degeneration and necrosis. Under normal conditions, free radicals do not cause pathological changes in the body, because the body has enzymes to fight free group damage, such as SOD1, glutathione peroxidase (GSH), catalase (CAT) and non-enzymatic system defense system. Such as non-enzymatic antioxidants carotenoids, tocopherols and vitamin C, as well as free metals and heme-binding proteins. They can suspend the free radical chain reaction, or turn the free radical into a less active substance, so that the production and removal of free radicals are in balance. If the production of free radicals exceeds the body’s ability to remove them, the body will experience oxidative stress . 
Allicin, the primary compound in garlic oil, has demonstrated its ability to induce phase II enzymes, thereby enhancing antioxidant activity and protecting ALS neurons from oxidative stress . Administering allicin orally with DATS to SOD1-G93A mice at clinical onset induced the expression of HO-1 in the lumbar spinal cord, directly influencing oxidative damage. These findings suggest that oral administration of DATS significantly extends the lifespan of mice . 3.2.3.3 Cytotoxicity of calcium In classical acute excitatory toxicity, the influx of Na+ and Cl- disrupts intracellular Ca2+ homeostasis, triggering a cascade of detrimental biochemical processes. Opening of voltage-gated calcium channels leads to a surge in calcium ions, resulting in excessive release of the excitatory amino acid glutamic acid. This influx of calcium ions through NMDA/AMPA receptors, metabolic glutamic acid receptors, and voltage-dependent calcium channels activates enzymes such as proteases, lipases, kinases, nucleases, and NOS. The generation of free radicals and synthesis of NO further exacerbate neuronal damage, ultimately leading to programmed cell death via apoptosis gene activation. Neuroprotective drugs are thought to primarily act by preventing calcium influx, regulating excitatory amino acid toxicity, and modulating microvascular inflammatory responses. Studies have shown that an extract from the Chinese herb Chuan Xiongqin, can protect nerve cells by lowering intracellular calcium levels and inhibiting glutamate release . Callewaere et al. investigated the protective effect of ligustrazine by stimulating nerve cells with stromal cell-derived factor (SDF-1), which elevates intracellular calcium levels, and then treating them with ligustrazine. They observed a significant decrease in intracellular calcium levels in the ligustrazine-treated group exposed to SDF-1, indicating ligustrazine’s ability to mitigate calcium cytotoxicity and serve as a neuroprotective agent . 
3.2.3.4 Other relevant mechanisms

Some studies have also found that the incidence of ALS is related to neurotrophic factor deficiency, imbalance of metals and trace elements, apoptosis, viral infection, and abnormal neurofilament aggregation.
3.2.4 The possible immunological mechanisms of HM in treating ALS

HM has notable anti-inflammatory properties and extensive immunomodulatory effects, and can achieve good efficacy in disorders of the immune system. ALS can be alleviated and treated by protecting the blood-brain barrier, countering neuroinflammation, inhibiting activation of the complement system, inhibiting natural killer cell cytotoxicity, and regulating T cells .
3.2.4.1 Blood-brain barrier protection

The blood-CNS barrier, comprising the blood-brain barrier and the blood-spinal cord barrier, controls the transport of substances across it, thereby maintaining a relatively stable environment within nervous tissue. It is an important immune barrier in ALS, effectively preventing toxic substances in the blood from infiltrating the central nervous system and pumping toxins outward . The targets through which HM protects the blood-brain barrier are closely related to tight junction proteins. Tight junctions are composed of transmembrane proteins (occludin, claudins and junctional adhesion molecules) together with the peripheral cytoplasmic zonula occludens proteins (ZO-1, ZO-2 and ZO-3). Both borneol and astragaloside can increase the expression of ZO-1 and claudin-5, and their combination with total notoginseng saponins can also inhibit the downregulation of ZO-1 and occludin, thereby significantly ameliorating blood-brain barrier permeability . When borneol was administered with safflower, the expression of MMP-2 and MMP-9 decreased while that of ZO-1 and claudin-5 increased . Sijunzi Decoction can increase the expression of occludin, ZO-1, claudin-1 and their mRNAs . After administration of Buyang Huanwu Decoction, serum von Willebrand factor and brain-tissue vascular endothelial growth factor, MMP-9 and MMP-2 all decreased, indicating a protective effect on the blood-brain barrier . Ginsenoside Rg1 can likewise up-regulate ZO-1 and occludin and down-regulate matrix metalloproteinase-2 and -9 to restore blood-brain barrier integrity . Breviscapine acts mainly by up-regulating the expression of CD63 and the blood-brain barrier tight junction proteins claudin-5, occludin and ZO-1 .
Jiweiling-related preparations such as Jiweiling lyophilized powder can significantly improve blood-brain barrier scores, delay neuronal edema and reduce blood-brain barrier permeability in injured mice . By regulating the permeability of the blood-brain barrier, the active components of Chinese medicine can restore the integrity of the immune barrier in ALS and enhance its self-protection.

3.2.4.2 Regulates microglia

Neuroinflammation is an important host defense mechanism that protects the brain from infection or injury and restores normal structure and function . Chronic inflammation, however, can induce cytotoxicity and worsen the severity of different neurodegenerative diseases, such as Parkinson’s disease (PD) , multiple sclerosis (MS) , and ALS . Dysregulation of the inflammatory response, characterized by abnormal activation of microglia and an overabundance of pro-inflammatory cytokines, leads to the neurodegeneration observed in ALS . Chinese HM has a rich historical background, remarkable curative effects and minimal adverse reactions. It acts by regulating microglial activation and polarization and inhibiting inflammatory responses, mediated through microglia and related pathways such as the NF-κB, Toll-like receptor, Notch, AMPK and MAPK signaling pathways . Tripterygium wilfordii extract can reduce the production of pro-inflammatory factors and nitric oxide by blocking phosphorylation of extracellular signal-regulated kinase 1/2 and nuclear factor-κB, thereby exerting an anti-inflammatory effect against autoimmunity . As resident macrophages of the central nervous system, microglia play a key role in maintaining brain homeostasis , but if microglia are over-activated, they release many pro-inflammatory factors and neurotoxic substances, aggravating the damage.
Melittin, an active component extracted from bee venom, can directly reduce the activity of microglia or indirectly reduce the secretion of inflammatory factors and the phosphorylation of p38 mitogen-activated protein kinase in the brainstem and spinal cord, significantly modulating inflammation in ALS mice and delaying disease progression . The extract can also inhibit the c-Jun N-terminal kinase signaling pathway, reduce microglial expression of inducible nitric oxide synthase and cyclooxygenase-2, and thereby exert an anti-inflammatory effect . Up-regulated TLR4 is a key receptor involved in microglial activation and function. Geniposide (GEN), an active ingredient of Gardenia, can effectively reduce the expression of TLR4, MyD88, p-IκB, NF-κB, p-ERK1/2 and p38 proteins, exerting an anti-inflammatory effect and inhibiting microglial activation by down-regulating the TLR4/MyD88-dependent pathway . The triterpenoid saponin polygalasaponin F (PGSF), extracted from Polygala japonica (Guazijin), can effectively counteract the up-regulation of toll-like receptor 4 (TLR4) in microglia and down-regulate inflammation-induced expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2), thereby ameliorating microglial overactivation and the production of neurotoxic factors and reducing nerve cell damage . KCHO-1, a natural ethanol extract obtained from turmeric, Salvia miltiorrhiza, Tianma, papaya and other herbs, can reduce oxidative stress by decreasing expression of the gp91phox subunit of NADPH oxidase, down-regulating inducible nitric oxide synthase, and alleviating p38 mitogen-activated protein kinase phosphorylation and extracellular signal-regulated kinase 1/2 activation, thereby inhibiting microglial proliferation and activation .
Astragaloside IV, total saponins and baicalin can regulate microglial polarization and improve brain tissue inflammation by mediating the MAPK signaling pathway. Calycosin can reduce TNF-α-containing microglial populations by activating the BDNF/TrkB signaling pathway, thereby reducing inflammation and neuronal damage .

3.2.4.3 Inhibition of complement system activation

The complement system is a group of proteins in human serum and interstitial fluid that acquire enzymatic activity upon activation. It comprises more than 30 kinds of soluble and membrane-bound proteins synthesized by the liver, with nine principal components named C1, C2, C3, … and C9 . Normal complement activation enhances immunity, but excessive activation can cause inflammation, tissue damage and various immune hemolysis reactions . Complement activation has long been implicated in the pathogenesis of ALS, and many clinical and animal studies have shown that complement factors, including C1q and C3, are strongly upregulated in regions of motor neuron death . HM has a complex mechanism of action and a wide range of effects. Research indicates that polysaccharides in natural HM are important components for regulating complement activity . C1r, C1s, C3 and C4 were the main targets of the crude polysaccharide extract of S. mellowsis, which inhibited the activity of the complement system . The polysaccharide component PsEULl3 of Eucommia ulmoides has very high anti-complement activity against the classical pathway . The APS-2 glucoglycan isolated and purified from the plant has good anti-complement activity, with Clq, C2, C3, C5 and C9 as its targets . Lentinan can decompose complement C3 into the anaphylatoxin C3a; the mechanism may be that recognition sites on complement proteins recognize the polysaccharide structure and activate the complement system.
The main targets of quercetin 7,3′,4′-trimethyl ether from patchouli were C1q, C2, C5 and C9 . Chinese herbs are generally known to inhibit complement activity by acting on the classical pathway, but their active ingredients can also act on the alternative pathway. For instance, C1q, C2, C4 and C9 are the main targets of the charming chun components extracted from knotweed , and extract components can act on different complement targets singly or on multiple targets together . The catechin-3-O-β-D-(2-cinnamyl)-glucoside isolated and identified from the Chinese herb Anagardia showed varying degrees of inhibition of both the classical and alternative pathways of the complement system .

3.2.4.4 Regulates T lymphocytes

In the central nervous system, CD4+ T cells are believed to have neuroprotective effects; they can promote the neuroprotective functions of glial cells and delay disease progression by changing glial morphology . CD4+ T cells can differentiate into regulatory and effector T cells, with the former regulating the proliferation of the latter.
Observation of ALS patients has likewise shown that increased effector T cells in blood and cerebrospinal fluid are associated with decreased survival, while increased regulatory T cells in blood are associated with improved survival . Trichosanthin, extracted from Trichosanthes, can directly increase the number of regulatory T cells and the level of the immune marker interleukin-10 and induce higher expression of forkhead box protein P3 (Foxp3), thereby enhancing the immunoregulatory capacity of regulatory T cells . Dichloromethylamine, a soluble component of wheat zhai, can alter AKT phosphorylation signaling, reduce the differentiation of T helper 17 cells, and induce the proliferation of regulatory T cells to maintain balance . Zuogui Pill can also up-regulate interleukin-10, thereby improving the immune response function of regulatory T cells .

3.2.4.5 Regulates natural killer cells

Natural killer (NK) cells are key components of innate immunity and are highly cytotoxic. One study showed an increase in NK cells in the blood of ALS patients compared to controls .
All the above studies show that, by influencing tight junction proteins such as ZO-1 and claudin-5, reducing the production of pro-inflammatory factors and nitric oxide, acting on complement targets, and up-regulating cell-surface receptor molecules, HM can regulate the stability of the blood-brain barrier, inhibit activation of microglia and the complement system, weaken the toxicity of natural killer cells, and regulate T cell function, thereby alleviating and treating ALS at the level of immune function .
4 Discussion

4.1 Summary of results

ALS poses a significant challenge to the medical community due to its nature as a neurodegenerative disease. Adjunct therapies, such as HM treatment, have garnered attention as potential avenues for novel therapeutic approaches, particularly in efforts to slow the progression of ALS. This study investigates the efficacy of HM therapy in an ALS mouse model, probing its potential mechanisms of action and its impact on immune system regulation.
In this meta-analysis, we synthesized data from 18 studies involving a total of 443 animals to evaluate the effects of herbal treatments in a mouse model of ALS. Amalgamating findings from these investigations gave a more comprehensive picture of the potential of herbal therapy for ALS management. The research demonstrated significant positive effects of herbal treatments in ALS, highlighting the role of their active constituents in neurogenic regulatory pathways crucial for delaying neurodegeneration. Our meta-analysis results showed notable effects of herbal medicine in ALS mice. However, the heterogeneity (I² value) in most analyses was quite high, suggesting significant variation in experimental design, animal models, treatment doses and duration across the included studies. This heterogeneity may affect the stability and interpretability of the meta-analysis results, so caution is needed when interpreting these findings. To address it, we employed a random-effects model to account for variability across studies, aiming to minimize the impact of these differences, and we conducted sensitivity analyses to assess the influence of individual studies on the overall results. Despite these efforts, it is important to acknowledge that high heterogeneity may reduce the generalizability of these findings. Nonetheless, this study provides valuable insights into the potential of herbal medicine in ALS treatment: herbal treatments may protect motor neurons and slow disease progression. Future research should use more consistent study designs to reduce heterogeneity and further validate these findings. The scoping review involved 35 studies. Immune disorders play a crucial role in the pathological process of ALS.
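As an illustrative aside, the random-effects pooling and the I² heterogeneity statistic discussed above are commonly computed with the DerSimonian-Laird estimator. The sketch below uses hypothetical effect sizes and variances, not data from the included studies:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) with I^2 heterogeneity.

    effects   : per-study effect sizes (e.g. survival extension in days)
    variances : per-study sampling variances
    Returns (pooled_effect, tau2, I2_percent).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, tau2, i2

# Hypothetical data from five studies (effect sizes and their variances)
effects = [8.0, 12.5, 5.0, 15.0, 9.5]
variances = [4.0, 6.5, 3.0, 8.0, 5.0]
pooled, tau2, i2 = dersimonian_laird(effects, variances)
```

A high I² (conventionally above 50-75%) flags substantial between-study variability, which is why the random-effects weights down-weight precise but discordant studies relative to a fixed-effect analysis.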
Studies have found that the activation of microglia and astrocytes in the central nervous system, as well as the increase in the number of pro-inflammatory peripheral lymphocytes and macrophages, directly affect the occurrence and progression of ALS. In particular, genetic mutations associated with ALS, especially SOD1, are thought to further increase neuroinflammation levels, further confirming the imbalance of the immune system in ALS pathophysiology. Clinical studies have not only revealed the influence of genetic variants on immune disorders, but have also found that, even in the absence of significant genetic changes, immune disorders lead to impaired function of regulatory T lymphocytes and increased proinflammatory macrophages. This further underscores the importance of the immune system in the onset and progression of ALS and the need to consider immune regulation in treatment. In the future, developing effective methods to monitor the pathophysiology and progression of inflammation-mediated diseases will be important. Chemokines, especially CXC chemokines, may play an important role in the pathophysiology of ALS. Understanding their role in the onset and progression of ALS, as well as their potential as therapeutic targets, may provide new directions for the treatment of ALS. HMs play an important role in ALS treatment because of their immunomodulatory properties. Whether as a single Chinese medicine or a compound Chinese medicine, HM can treat ALS by protecting the blood-brain barrier, exerting anti-neuroinflammatory effects, inhibiting the activation of the complement system, regulating the toxicity of natural killer cells, and regulating T cell-mediated immune responses. A bone marrow transplant experiment was conducted to evaluate the contribution of the immune system to movement disorders, specifically the role of CD8 T cells.
One study concluded that pathological Senataxin expression in the hematopoietic system is necessary for the development of motor phenotypes in mice, supporting the idea that dysfunction of the nervous system and hematopoietic/immune system contributes to the onset or progression of ALS. While existing treatment options offer hope to ALS patients, there is a need to further delve into the mechanisms of immune disease and explore more personalized and integrated treatment strategies to address this challenging disease. By targeting treatments for diseases of the immune system, it is possible to slow disease progression, improve patients’ quality of life and extend their survival. These efforts will provide more hope and possibilities for future ALS treatments. 4.2 Study quality and risk of bias Based on the evaluation results, the studies included in the review were assessed using the CAMARADES and SYRCLE’s ROB tools to evaluate their methodological quality and risk of bias. For CAMARADES, the quality scores of the studies ranged from 4 to 8, out of a maximum score of 10. Specifically, one study received a score of 4; two studies received a score of 5; four studies received a score of 6; seven studies received a score of 7, and four studies received a score of 8. These scores indicate a moderate to high level of methodological quality in the included studies. In contrast, for SYRCLE’s ROB, the quality scores of the studies ranged from 3 to 7, out of a maximum score of 10. Two studies received a score of 3; five studies received a score of 4; seven studies received a score of 5; three studies received a score of 6, and one study received a score of 7. These scores suggest a moderate level of risk of bias in the included studies. 4.3 Analysis and discussion of therapeutic effect Many studies on the efficacy of HM in treating ALS have highlighted affirmative results, with most studies yielding positive outcomes.
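As a quick arithmetic check of the score distributions above, the counts sum to the 18 included studies, and the median scores (7 for CAMARADES, 5 for SYRCLE's ROB) are consistent with the "moderate to high quality" and "moderate risk of bias" characterizations. The snippet below is illustrative only and not part of the original study:

```python
from statistics import median

# Reported score distributions (score -> number of studies); max score 10 for both tools
camarades = {4: 1, 5: 2, 6: 4, 7: 7, 8: 4}
syrcle = {3: 2, 4: 5, 5: 7, 6: 3, 7: 1}

for name, dist in [("CAMARADES", camarades), ("SYRCLE's ROB", syrcle)]:
    # Expand the tally into one score per study, then summarize
    scores = sorted(s for s, n in dist.items() for _ in range(n))
    print(f"{name}: n = {len(scores)}, median score = {median(scores)}")
```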
This study conducted a meta-analysis of 18 animal studies investigating the efficacy of HM in treating ALS to provide a more objective and comprehensive evaluation. Among the included studies, a limited number were deemed high quality based on bias risk assessment. While two studies provided detailed descriptions of randomization using random number tables, the methods were unspecified in the remaining ten. Six studies did not specify the allocation concealment method, and the blinding implementation was only mentioned in three studies, with the remainder not addressing blinding. The primary outcome measures varied across the trials, with the majority utilizing onset time and survival time as endpoints; specifically, 11 out of 18 studies used onset time, while 16 used survival time. Some studies employed measures of motor function (e.g., stride length) and neuron-related indicators as efficacy measures, but the evaluation methods were inconsistent. Three studies reported disease duration in experimental animals as an outcome measure. The pooled analysis showed remarkable effects of HM in ALS mice, including onset time (SMD = 1.75, 95% CI 1.14 to 2.36, Z = 5.60, P < .01), survival time (SMD = 1.42, 95% CI 0.79 to 2.04, Z = 4.44, P < .01), stride length (SMD = 1.90, 95% CI 1.21 to 2.59, Z = 5.39, P < .01), and disease duration (MD = 6.79, 95% CI −0.28 to 13.87, Z = 1.88, P = .06), showing HM's efficacy in treating ALS mice. 4.4 Implication This study employed a comprehensive multidimensional systematic evaluation approach to thoroughly investigate the efficacy of HM in treating ALS animal models. Through a comprehensive reanalysis of various outcome indicators, we aimed to provide evidence-based medical support for the clinical application of HM in ALS treatment.
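The reported Z statistics can be approximately recovered from each pooled effect and its 95% CI under the usual Wald assumption (SE = CI width / (2 × 1.96), Z = effect / SE); small discrepancies come from rounding of the published bounds. This is an illustrative consistency check, not part of the original analysis:

```python
# Pooled effects reported above: (estimate, CI lower bound, CI upper bound)
effects = {
    "onset time (SMD)": (1.75, 1.14, 2.36),
    "survival time (SMD)": (1.42, 0.79, 2.04),
    "stride length (SMD)": (1.90, 1.21, 2.59),
    "disease duration (MD)": (6.79, -0.28, 13.87),
}

for name, (est, lo, hi) in effects.items():
    se = (hi - lo) / (2 * 1.96)  # Wald-type standard error from the CI width
    print(f"{name}: SE = {se:.3f}, Z = {est / se:.2f}")
```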
This study utilized a rigorous scientific statistical method, meta-analysis, which not only opens new avenues for clinical research on herbal medicine in China but also promotes the integration of HM with modern evidence-based medicine. Specifically, this study summarized recent research on HM treatment for ALS. Through systematic review, we analyzed the effects of HM on ALS onset time and survival time, identifying its potential efficacy in delaying disease progression and extending survival. However, while the systematic review provided overall efficacy data from preclinical studies, it could not fully elucidate the underlying biological mechanisms. Therefore, we conducted a scoping review and found that HM plays a significant role in immunotherapy for ALS. HM modulates innate and adaptive immune processes by reshaping the blood-brain barrier, inhibiting natural killer cell activity, suppressing complement system activation, regulating microglial cell activity, and restoring T cell function, thereby protecting motor neurons from toxic damage. These mechanisms not only provide a biological explanation for the efficacy observed in the systematic review but also deepen our understanding of the mechanisms through which HM acts in ALS treatment. These findings lay the foundation for further understanding the immunomodulatory mechanisms of HM in ALS treatment. Combining the systematic review with the scoping review allowed us to delve deeper into the immune mechanisms of HM based on quantitative analysis of its efficacy. This dual-pronged research approach not only provides more comprehensive insights into current ALS research but also guides future clinical applications and drug development. The significance of this study lies in its ability to deepen our understanding of HM in ALS treatment while also providing scientific evidence for the integration of modern evidence-based medicine with traditional herbal medicine.
By reviewing the immunomodulatory mechanisms of HM in ALS treatment, we provide a theoretical basis and new therapeutic strategies for this refractory disease. Future research should continue to explore the potential role of HM in ALS treatment, aiming to bring more effective treatment options to ALS patients. This comprehensive analysis not only enhances our understanding of ALS treatment methods but also advances the clinical application of HM in ALS, paving the way for research on refractory diseases. Through rigorous scientific research design and multidimensional evaluation methods, we provide a solid foundation for future ALS research and offer important references for the application of HM in modern medicine. 4.5 Limitations Currently, trials on the treatment of ALS with HMs are relatively scarce. Moreover, our study faced some limitations. First, our search was limited to publicly published literature. This means that we may have missed some relevant grey literature that may contain important information on the effectiveness of HM in treating ALS. Second, our search covered only English and Chinese studies, which may introduce a degree of linguistic bias, as studies in other languages may exist but were not taken into account. In addition, we need to recognize the existence of publication bias, that is, negative study results are relatively less likely to be published, which may cause our analysis to be influenced by positive studies and overestimate the therapeutic effect. These limitations and biases may lead to varying degrees of bias in our findings. First, the incomplete coverage of the literature may mean that we missed some potentially important studies, resulting in an insufficiently comprehensive overall assessment of the efficacy of HM in the treatment of ALS. Secondly, due to the limited number and varying quality of studies, the results of our analysis may lack robustness and have certain uncertainties.
In addition, due to the insufficient sample size, our analysis may lack the statistical power to draw accurate conclusions or generalize to the entire ALS patient population. Therefore, in order to more fully evaluate the efficacy of HM in the treatment of ALS, future studies need to overcome these limitations and biases. This could include expanding literature searches to include studies in grey literature and other languages, as well as strengthening assessments of research quality and controlling the effects of publication bias. At the same time, efforts should be made to increase the sample size and improve the quality of studies to ensure more reliable and robust analysis results and provide more convincing evidence support for the treatment of ALS with HM. Third, a limitation of our study is the lack of detailed exploration of the side effects of HMs. Since our study focused on a preclinical systematic review, the issue of side effects of HM is rarely addressed in the existing literature. Detailed side-effect studies usually need to be performed in clinical studies. Considering that herbal medicine is widely used in treatment, understanding its potential side effects is very important to ensure patient safety. We expect that future clinical studies will explore this issue in depth and provide more comprehensive and reliable data for systematic evaluation of the safety of Chinese herbal medicines.
Conclusion The preclinical evidence supports the utilization of HM as a conventional treatment for ALS mice. Growing evidence indicates that HM may potentially delay neurological degeneration in ALS by activating diverse signaling pathways, especially immune pathways.
Suction pressure levels during bronchial obstruction are related to bronchoalveolar lavage recovery failure: A clinical trial

Bronchoalveolar lavage (BAL) is a valuable diagnostic tool for interstitial lung and infectious bronchopulmonary diseases. For effective diagnosis, the BAL recovery rate should exceed 30%. Predictive factors for a low BAL recovery rate include male sex, advanced age, smoking history, chronic obstructive pulmonary disease (COPD), performing BAL in bronchi other than the middle lobe or lingula, and a low forced expiratory volume in 1 second relative to forced vital capacity. However, predicting BAL recovery failure (<30%) remains challenging. Our previous research identified a thin bronchial wall as a predictor of BAL recovery failure, suggesting that bronchial wall weakness may contribute to a reduced recovery rate. The primary cause of BAL recovery failure is bronchial collapse due to the negative pressure in the bronchoscope’s working channel. We hypothesized that suction pressure levels during bronchial collapse might correlate with BAL recovery failure. This study aimed to measure suction pressure levels during bronchial collapse to explore their relationship with BAL recovery rates. 2.1. Study design and setting We prospectively collected data from 103 adult patients (aged ≥ 18 years) who underwent BAL procedures at our hospital from May 2023 to July 2024. The flowchart of the study is shown in Figure . The exclusion criteria were patients whose bronchial lumen could not be confirmed due to sputum or steep bronchial angles, patients whose suction pressure level measurements and BAL procedure involved different bronchi, and patients whose BAL procedures were incomplete due to sputum obstruction of the bronchoscope’s instrumentation channel, wedge dislodgment, transbronchial lung biopsy before BAL, or side effects.
Suction pressure levels during bronchial obstruction were measured in all patients. Patients were categorized into a failure group (recovery rate < 30%) or a success group (recovery rate ≥ 30%), and the data were compared between these groups. Additionally, factors correlating suction pressure levels during bronchial collapse with the BAL recovery rate and the area of the bronchial wall, as measured by chest computed tomography (CT) scan, were analyzed. The data collected included suction pressure levels during bronchial collapse, symptoms, laboratory results, radiological findings, and other relevant information. The study was approved by the Institutional Review Board of our hospital (Study number: 24010, IRB approval dates: June 25th, 2024), and all patients provided written informed consent. All methods were carried out in accordance with relevant guidelines and regulations or the Declaration of Helsinki. 2.2. BAL procedure The BAL procedure in this study followed methods similar to those in previous reports. Bronchoscopy was performed under pharyngeal anesthesia using 2% xylocaine solution, along with intravenous premedication with 1 to 5 mg of midazolam as a sedative and/or 17.5 to 35 mg of pethidine as an analgesic, which was administered per routine practice. The premedication doses were determined by the attending physician based on patient needs. The bronchoscope was inserted transorally, and 2% xylocaine solution was administered through the instrumentation channel. Oxygen was provided through a humidified nasal tube during the examination, with oxygen saturation monitored via pulse oximetry. For the BAL procedure, the tip of the bronchoscope was positioned in the wedge location within a lobe/segment/subsegmental bronchus. BAL was performed using 3 aliquots of 50 mL of physiological saline at room temperature. Following a method commonly used in Japan, saline was gradually instilled and then gently suctioned back through the instrumentation channel. 
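The recovery-rate arithmetic implied by this three-aliquot protocol is straightforward; the helper below is a minimal sketch with hypothetical returned volumes (the function name and values are ours, not from the study), using the 30% threshold that defines recovery failure in this paper:

```python
def recovery_rate(aliquot_returns_ml, instilled_ml=150):
    """Total BAL recovery rate (%) from the returned volumes of the 3 x 50-mL aliquots."""
    return 100 * sum(aliquot_returns_ml) / instilled_ml

# Hypothetical returned volumes (mL) for the 1st, 2nd, and 3rd aliquots
rate = recovery_rate([20, 15, 8])
print(f"recovery rate = {rate:.1f}%, failure (<30%) = {rate < 30}")
```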
The recovery rates from the 50-, 100-, and 150-mL injections were recorded as the recovery rates for the 1st, 2nd, and 3rd aliquots, respectively. 2.3. Measurement of suction pressure levels during bronchial obstruction Suction pressure levels during bronchial obstruction were evaluated after the tip of the bronchoscope was placed in the wedge position within a bronchus and before saline was instilled. The suction pressure was gradually increased from 2 hPa, and the level was recorded when the bronchus fully collapsed (Fig. ). Bronchial collapse was defined as complete obstruction of the bronchial wall, as observed in bronchoscopy images (Fig. C). The attending physician continued to press the suction button during this procedure. The suction pressure levels were not disclosed to the physicians performing the BAL procedure. Three measurements were taken, and the median value was used for analysis. Bronchial collapse was assessed by 2 or more physicians. The reason for starting at 2 hPa was that the increase in suction pressure was not smooth but rather rapid at approximately 5 hPa when starting at 0 hPa. Starting at 2 hPa allowed for more gradual increases. None of the patients experienced bronchial collapse at 2 hPa. 2.4. Definition of BAL recovery failure and the area of the bronchial wall The BAL failure group was defined as having a total recovery rate of <30%, which is considered insufficient for an effective diagnosis in patients with interstitial lung disease. The area of the bronchial wall at the target site for BAL was calculated using a SYNAPSE VINCENT volume analyzer on CT images (FUJIFILM Medical Co., Ltd., Tokyo, Japan), as referenced in a previous study. The formula used for calculating the area of the bronchial wall was as follows: area of the bronchial wall = (major axis length of outer diameter) × (minor axis length of outer diameter) × 3.14 ÷ 4 − (area of the bronchial lumen).
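The wall-area formula above is an ellipse area computed from the outer diameters, minus the lumen area (with π rounded to 3.14, as in the text). A minimal sketch with hypothetical measurements:

```python
def bronchial_wall_area(outer_major_mm, outer_minor_mm, lumen_area_mm2):
    """Bronchial wall area per the formula in Section 2.4:
    (major axis) x (minor axis) x 3.14 / 4 - (lumen area)."""
    return outer_major_mm * outer_minor_mm * 3.14 / 4 - lumen_area_mm2

# Hypothetical values at one of the 5 measurement points near the bronchus orifice
print(bronchial_wall_area(10.0, 8.0, 40.0))  # mm^2
```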
The outer and inner diameters of the bronchus were measured by lung analysis and were analyzed at 5 points near the target bronchus orifice. The average of these 5 data points was used. 2.5. Statistical methods All of the data were analyzed and processed using EZR, version 1.53. The Mann–Whitney U test and Fisher exact test were employed for comparisons between the BAL failure and success groups. The Kruskal–Wallis test was used for comparisons among 3 or more groups. Spearman correlation analysis was conducted to identify relationships among the different variables. The sensitivity, specificity, and odds ratios were calculated, and a receiver operating characteristic (ROC) curve was constructed to determine the cutoff values. The level of statistical significance was set at P = .05 (2-tailed).
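The paper does not detail how the ROC cutoff values were derived; a common choice is the value maximizing Youden's J (sensitivity + specificity - 1). The sketch below uses entirely hypothetical pressures and outcomes and assumes a low collapse pressure predicts recovery failure; it is an illustration, not the authors' analysis:

```python
def best_cutoff(pressures, failures):
    """Return (cutoff, sensitivity, specificity) maximizing Youden's J.

    A test result is 'positive' (predicts failure) when pressure <= cutoff,
    i.e., the bronchus collapses at a low suction pressure."""
    best = None
    for c in sorted(set(pressures)):
        tp = sum(p <= c and f for p, f in zip(pressures, failures))
        fn = sum(p > c and f for p, f in zip(pressures, failures))
        tn = sum(p > c and not f for p, f in zip(pressures, failures))
        fp = sum(p <= c and not f for p, f in zip(pressures, failures))
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        if best is None or sens + spec - 1 > best[3]:
            best = (c, sens, spec, sens + spec - 1)
    return best[:3]

# Entirely hypothetical collapse pressures (hPa) and recovery-failure labels
pressures = [4, 5, 6, 6, 8, 10, 12, 15, 18, 20]
failures = [True, True, True, False, True, False, False, False, False, False]
print(best_cutoff(pressures, failures))
```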
The data collected included suction pressure levels during bronchial collapse, symptoms, laboratory results, radiological findings, and other relevant information. The study was approved by the Institutional Review Board of our hospital (Study number: 24010, IRB approval date: June 25th, 2024), and all patients provided written informed consent. All methods were carried out in accordance with relevant guidelines and regulations, including the Declaration of Helsinki.

The BAL procedure in this study followed methods similar to those in previous reports. Bronchoscopy was performed under pharyngeal anesthesia using 2% xylocaine solution, along with intravenous premedication with 1 to 5 mg of midazolam as a sedative and/or 17.5 to 35 mg of pethidine as an analgesic, administered per routine practice. The premedication doses were determined by the attending physician based on patient needs. The bronchoscope was inserted transorally, and 2% xylocaine solution was administered through the instrumentation channel. Oxygen was provided through a humidified nasal tube during the examination, with oxygen saturation monitored via pulse oximetry. For the BAL procedure, the tip of the bronchoscope was positioned in the wedge location within a lobe/segment/subsegmental bronchus. BAL was performed using 3 aliquots of 50 mL of physiological saline at room temperature. Following a method commonly used in Japan, saline was gradually instilled and then gently suctioned back through the instrumentation channel.
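One plausible reading of the recovery-rate bookkeeping described above — a per-aliquot rate for each 50-mL instillation plus a total rate over all three — can be sketched as follows; the function and example volumes are illustrative, not taken from the study:

```python
def recovery_rates(recovered_mL, aliquot_mL=50.0):
    # Per-aliquot recovery rates and the total rate over all aliquots,
    # for a 3 x 50 mL BAL as described above. Values are fractions.
    per_aliquot = [r / aliquot_mL for r in recovered_mL]
    total = sum(recovered_mL) / (aliquot_mL * len(recovered_mL))
    return per_aliquot, total

per_aliquot, total = recovery_rates([25.0, 30.0, 35.0])  # total = 0.6
```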
This study included 103 patients, whose median suction pressure level during bronchial obstruction was 10 hPa (range 3–22). For each patient, the median difference between the minimum and maximum suction pressure levels during bronchial obstruction was 2 hPa (95% confidence interval [CI]: 1.56–2.44). In 53 of the 103 patients (51.5%), the suction pressure levels during bronchial obstruction remained consistent across 2 or 3 of the 3 measurements. The baseline characteristics of the patients are shown in Table . Thirteen patients (12.6%) were classified into the failure group, whereas 90 patients (87.4%) were in the success group. The median age of the patients in the failure group was 74 years (range 47–86), with 7 males (53.8%), which was not significantly different from that of patients in the success group (median age 72 years [range 18–90], P = .290; male n = 56 [62.2%], P = .560). Patients who underwent BAL at sites other than the middle lobe/lingula were significantly more common in the failure group than in the success group (n = 5 [38.5%] vs n = 11 [12.2%], P = .029). The area of the bronchial wall in the failure group was smaller than that in the success group (median 10.4 mm² [range 4.4–12.6] vs median 14.2 mm² [range 4.6–17.1], P = .001). No significant differences were observed in smoking history (P = .877), the presence of COPD (P = .565), or other factors. Figure shows the comparison of suction pressure levels during bronchial obstruction between the failure and success groups, revealing that patients in the failure group had lower suction pressure levels during bronchial obstruction than did those in the success group (median 8 hPa [95% CI: 3–13] vs 10 hPa [4–22], P < .001).
The correlation between suction pressure levels during bronchial obstruction and the BAL recovery rate is illustrated in Figure . Although there was no significant relationship between the BAL recovery rate and suction pressure level (R = 0.145, P = .143) (Fig. A), a positive correlation was observed between the area of the bronchial wall and suction pressure level (R = 0.256, P = .010) (Fig. B). Additionally, there was no significant difference in suction pressure levels between the middle lobe/lingula and other BAL target sites (median 10 hPa [range 4–22] vs 9 hPa [range 3–22], P = .122). Figure presents the ROC curve for suction pressure levels during bronchial obstruction in predicting BAL recovery failure. The area under the ROC curve was 0.807 (95% CI: 0.687–0.927). With a cutoff value of <9.5 hPa, the sensitivity was 67.8%, the specificity was 92.3%, and the odds ratio was 24.5 (95% CI: 3.34–1089.8). No patients experienced adverse events following the measurement of suction pressure levels. Additionally, only 1 patient, who was excluded from the study, could not complete the measurement of suction pressure levels because of the development of mucosal bleeding; this patient was still able to undergo the BAL procedure without complications.

This study demonstrated that suction pressure levels during bronchial obstruction predicted BAL recovery failure. A cutoff value of 9.5 hPa or more for suction pressure levels during bronchial obstruction was identified as predictive of BAL recovery success, with a sensitivity of 67.8% and a specificity of 92.3%. Additionally, a positive relationship was found between suction pressure levels during bronchial obstruction and the area of the bronchial wall. In our previous study, we hypothesized that bronchial wall weakness is related to the BAL recovery rate, and the results of the present study strengthen that hypothesis.
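As a sketch of how cutoff metrics like those reported above can be derived — treating a pressure below the cutoff as a prediction of recovery failure (one reading of the ROC analysis) and reading sensitivity, specificity, and the odds ratio off the resulting 2×2 table — with invented patient data, not the study's:

```python
def cutoff_metrics(pressures_hpa, failed, cutoff=9.5):
    # 2x2 table: "predicted failure" = pressure below the cutoff
    # (9.5 hPa, mirroring the ROC analysis above).
    tp = fp = tn = fn = 0
    for pressure, is_failure in zip(pressures_hpa, failed):
        predicted_failure = pressure < cutoff
        if predicted_failure and is_failure:
            tp += 1
        elif predicted_failure:
            fp += 1
        elif is_failure:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    odds_ratio = (tp * tn) / (fp * fn) if fp and fn else float("inf")
    return sensitivity, specificity, odds_ratio

sens, spec, oratio = cutoff_metrics(
    [5, 6, 12, 14, 8, 11],
    [True, True, False, False, False, True],
)
```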
The common cause of BAL recovery failure is bronchial collapse; therefore, a weaker bronchial wall may lead to easier bronchial obstruction under suction pressure. In addition to bronchial wall weakness, previous studies, including our previous research, reported that COPD, male sex, and age are related to the BAL recovery rate. In particular, COPD and aging can induce a loss of lung elastic recoil and increased compliance, which might be associated with bronchial wall weakness. Although these bronchial changes could lead to a low BAL recovery rate, COPD, male sex, and age were not significantly related to BAL recovery failure in this study. On the other hand, performing BAL at a site other than the middle lobe/lingula was a predictive factor for BAL recovery failure, which is consistent with the findings of previous studies. However, there was no significant difference in suction pressure levels during bronchial obstruction between the middle lobe/lingula and other sites. In our previous study, the target site of the BAL did not correlate with the area of the bronchial wall. Current guidelines suggest selecting the target site based on thin-slice CT rather than defaulting to the middle lobe/lingula, although this evidence is not fully established. However, it is generally considered that the effect of gravity when performing BAL at the middle lobe/lingula might facilitate easier BAL fluid recovery in patients in the supine position. Therefore, we identified 2 predictive factors for BAL recovery failure: suction pressure levels during bronchial obstruction and the target site of BAL. Low BAL recovery rates may not only lead to inaccurate diagnoses but also increase the risk of adverse events. Consequently, it is advisable to avoid performing BAL in bronchi with low suction pressure levels during bronchial obstruction and in bronchi other than the middle lobe/lingula whenever possible.
The adverse events associated with measuring suction pressure levels during bronchial obstruction were minimal, with only 1 patient experiencing a minor side effect of mucosal bleeding. This finding indicates that the procedure is very safe. However, generalizing this method is challenging. Only 1 patient with BAL recovery failure had a suction pressure of 9.5 hPa or more during bronchial obstruction, suggesting that high suction pressure levels could predict a BAL recovery rate of 30% or more. Conversely, 32.2% of patients with suction pressure levels below 9.5 hPa still achieved a high BAL recovery rate. This discrepancy may be due to the physical differences between gas and liquid: liquids have higher density and viscosity than air and tend to become more turbulent when negative pressure is applied. These characteristics might complicate the generalization of this method. Although further studies are needed, we believe that this study represents a significant step forward in predicting BAL recovery. This investigation has several limitations. First, the study was conducted at a single center. Second, the determination of bronchial obstruction for measuring suction pressure levels was dependent on the researcher, potentially introducing bias; however, since the BAL practitioners and researchers differed, we believe that this bias is minimal. Additionally, the area of the bronchial wall could not be analyzed for 1 patient using the SYNAPSE VINCENT volume analyzer. The attending physician selected the target site for performing BAL: BAL was primarily performed on the middle lobe/lingula if abnormalities were present there on CT, or on the lobe with abnormalities if the middle lobe/lingula had no abnormal lesions. This study did not analyze the role of the handling physician (resident or senior doctor); however, our previous study demonstrated that the handling physician was not significantly related to the BAL recovery rate.
This study demonstrated that suction pressure levels during bronchial obstruction were related to BAL recovery failure, suggesting that a weak bronchial wall may be more prone to collapse under suction pressure.

During the preparation of this work, the author(s) used ChatGPT 4.0 for English proofreading. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

Conceptualization: Masafumi Shimoda. Data curation: Masafumi Shimoda, Tatsuya Kodama, Masashi Ito, Kozo Morimoto, Kozo Yoshimori, Yoshiaki Tanaka. Formal analysis: Masafumi Shimoda. Investigation: Masafumi Shimoda, Tatsuya Kodama, Masashi Ito. Methodology: Masafumi Shimoda. Project administration: Ken Ohta. Software: Masafumi Shimoda. Supervision: Masafumi Shimoda, Yoshiaki Tanaka. Visualization: Masafumi Shimoda. Writing – original draft: Masafumi Shimoda. Writing – review & editing: Tatsuya Kodama, Masashi Ito, Yoshiaki Tanaka.
The state and consideration for skin test of β-lactam antibiotics in pediatrics

Introduction

β-lactam antibiotics are the most frequently used drugs and the most common drugs that cause allergic reactions in pediatrics; in severe cases, death can occur due to anaphylactic shock. Penicillins and cephalosporins are the two main and most used β-lactam antibiotics, especially in children. A β-lactam antibiotic skin test is widely used to predict anaphylactic reactions before medication in pediatrics. However, most patients with suspected hypersensitivity reactions to β-lactam antibiotics can tolerate these antibiotics, and positive skin test results are encountered more often in pediatric patients than in adults. There has been controversy over whether β-lactam antibiotics should be tested for skin allergy before medication in children, and there is a lack of unified standards and guidelines for the clinical operation of β-lactam antibiotic skin tests. The primary aim of this review was to determine whether β-lactam antibiotics should be tested for skin allergy before application in children by analyzing the mechanisms and causes of anaphylaxis to β-lactam antibiotics, the significance of β-lactam antibiotic skin tests, the current state of β-lactam antibiotic skin tests at home and abroad, and the problems of domestic and international skin tests.

The mechanism of β-lactam antibiotic allergic reactions

2.1 The mechanism of drug hypersensitivity reactions

Drug hypersensitivity reactions (DHRs) are mediated by the immune system after exposure to drugs. Based on immunologic mechanisms, the Gell and Coombs classification divides them into four categories. Type I (immediate hypersensitivity) is mediated by IgE specific for allergens and usually occurs within a few minutes to an hour after administration; typical clinical manifestations include urticaria, angioneurotic edema, bronchospasm, and anaphylactic shock.
Type II is characterized by antigen–antibody interactions, of which the vasculitides are classic examples. Type III is mediated by immune complexes, whose typical clinical manifestations include serum sickness and drug-associated vasculitis. The clinical manifestations of Type IV hypersensitivity reactions, mediated by T cells, include drug rash with eosinophilia and systemic symptoms, Stevens–Johnson syndrome, and others. β-lactam antibiotic reactions are defined as immediate reactions (IR) or non-immediate reactions (NIR) based on the time interval from the last dose to the onset of symptoms. IR occurs within 1 h after the last dose administration; its clinical manifestations include urticaria and severe anaphylaxis. NIR occurs more than 1 h after the last dose administration, up to several hours or days; its clinical manifestations include urticaria, angioedema, and maculopapular exanthema.

2.2 The mechanism of penicillin allergy

The chemical structure of penicillin contains a β-lactam ring, a tetrahydrothiazole ring, and an R side chain. In vivo, the products of penicillin metabolism bind to self-proteins, resulting in allergic reactions. These reactive products, also termed antigenic determinants, are classified into major and minor determinants. Benzylpenicilloyl (95%) is considered the major determinant, and other products (5%), including penicilloate, penicillanyl, and penicillenate, are considered minor determinants. Detecting a severe anaphylactic IgE-mediated reaction to these determinants, and avoiding administration when one is observed, is the basic principle of skin testing. The major determinant (benzylpenicilloyl polylysine, PPL) is recommended as the ideal skin test reagent. The most significant minor determinants include benzylpenicillin (penicillin G), benzylpenicilloate, and benzylpenilloate, as well as ampicillin or amoxicillin, but no standardized reagents containing all major and minor penicillin determinants are commercially available.
2.3 The mechanism of cephalosporin allergy

The chemical structure of cephalosporins contains a β-lactam ring, a six-membered dihydrothiazine ring, and R1 and R2 side chains; they differ from penicillins in the six-membered dihydrothiazine ring and the R2 side chain. During the degradation of cephalosporins, the β-lactam ring, dihydrothiazine ring, and R2 side chain are disrupted, while the R1 side chain may remain undamaged. Unlike penicillins, for which the antigenic determinants are definite, the antigenic determinants of cephalosporins have not been clearly defined. In addition, cephalosporins form hapten–protein conjugates less efficiently than penicillins. Some evidence supports the idea that degradation of the β-lactam ring destroys the R2 side chain, resulting in unstable conjugates and poorly identified determinants. The remaining β-lactam moiety and the R1 side chain, which can link covalently to host proteins, are central to immune and allergic reactions.

2.4 Cross-reactivity in β-lactam allergy

The structure of all β-lactam antibiotics includes a β-lactam ring, and penicillins additionally have a thiazolidine ring; different side chains distinguish different penicillins. Unlike the thiazolidine ring of penicillins, cephalosporins have a dihydrothiazine ring and R1 and R2 side chains, which distinguish different cephalosporins. During the metabolism of cephalosporins, the R1 side chain may remain intact, which can induce cross-reactivity with penicillins. Some evidence supports the idea that cross-reactivity between penicillins and cephalosporins depends primarily on whether their R1 side chains have a similar structure rather than on the similarity of the β-lactam ring. A meta-analysis indicated that first-generation cephalosporins significantly increased anaphylactic reactions, while there was no increase with second- and third-generation cephalosporins.
According to a review of the cross-reactivity of β-lactam antibiotics in anaphylactic reactions, cross-reactivity between penicillins and cephalosporins was rare, and when it occurred it was due to a similar R1 side-chain structure. Patients with anaphylactic reactions to penicillins could be treated with cefuroxime or ceftriaxone, whose side chains differ from those of penicillins. In addition, prospective studies demonstrated that cross-reactivity of penicillins and cephalosporins with monobactams and carbapenems was scarce, except for ceftazidime, which has the same R1 side chain as aztreonam.
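The side-chain rule discussed above can be sketched as a simple lookup. The group labels below are opaque placeholders, and the only pairings taken from the text are ceftazidime and aztreonam sharing an R1 side chain, and cefuroxime and ceftriaxone having side chains distinct from the penicillins — this is an illustration of the rule, not a clinical reference:

```python
# Opaque, illustrative R1 side-chain group labels; only the pairings
# stated in the text above are encoded here.
R1_GROUP = {
    "ceftazidime": "R1-a",
    "aztreonam": "R1-a",
    "cefuroxime": "R1-b",
    "ceftriaxone": "R1-c",
    "benzylpenicillin": "R1-d",
}

def possible_r1_cross_reactivity(drug_a, drug_b):
    # Flag potential cross-reactivity when two beta-lactams share the
    # same R1 side-chain group (the structural rule discussed above).
    group_a = R1_GROUP.get(drug_a)
    return group_a is not None and group_a == R1_GROUP.get(drug_b)
```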
The significance of skin tests

3.1 Penicillin skin test

Approximately 5% of children report a history of penicillin allergy; however, only a minority of these children are truly allergic. Because a history of penicillin allergy is a poor predictor of reactivity, skin testing is key to identifying whether patients can be treated with penicillin safely. The penicillin skin test is the fastest, most sensitive, and most economical method to predict penicillin type I allergic reactions in children. The standard penicillin skin test has a negative predictive value of 97%–99%, and its reagents include major determinants, minor determinants, positive controls, and negative controls. Because major and minor determinant test reagents are not widely available, the penicillin skin test is usually performed with diluted penicillin G.

3.2 Cephalosporin skin test

Unlike penicillins, whose antigenic determinants are stable and definite, anaphylaxis to cephalosporins may occur due to unique antigenic determinants of cephalosporins or, infrequently, to antigenic determinants shared with other β-lactam antibiotics, particularly penicillins. For this reason, the parent drugs are recommended as skin test reagents in addition to the classic benzylpenicillin reagents and semisynthetic penicillins. Although the cephalosporin skin test is less valuable than the penicillin skin test and has not been well validated, it has a good negative predictive value for cephalosporins with different R1 side chains. The ideal concentration for the cephalosporin skin test reagent has not been strictly established, and the association of the negative predictive value of the skin test with immediate hypersensitivity is uncertain. Few research data are available on the predictive values of skin tests for cephalosporins.
The state of skin test in pediatrics

Routine skin testing is not required before using β-lactam antibiotics in European and American countries; it is carried out only in China. In China, routine skin tests for cephalosporins have been canceled, but penicillin skin tests are still performed for both adults and children.
If penicillin is stopped for more than 72 h, the skin test should be repeated. In European and American countries, penicillin skin tests are performed only on patients with a history of allergy who need penicillin. Since few studies have been performed on children, skin testing in the pediatric population has not been standardized, and guidelines for diagnosing drug allergies in adults are generally applied to pediatrics. When the result of the skin test is positive, the patient is considered hypersensitive to the tested drug, and administration is suspended. In the past several years, the accuracy of skin tests has been questioned and discussed in some studies, which highlighted that the diagnostic value of skin tests is not optimal in children. Skin tests in children have many diagnostic shortcomings, such as low sensitivity and positive predictive value (PPV), especially for mild skin reactions. One study indicated that skin tests could be falsely positive in 80% of cases, leading to the unnecessary avoidance of drugs. Another study indicated that higher reagent concentrations, large injection volumes, and hidden additives or irritant effects could lead to false-positive results. In addition, because of the characteristics of pediatric patients, discomfort often occurs during skin testing, which can enlarge the area of redness. Skin tests in pediatrics, as in adult studies, show a high negative predictive value (NPV), but a positive result might prevent the use of drugs, because some studies confirmed a higher rate of false positives. A positive skin test result is still used to diagnose anaphylaxis in clinical practice, despite some reports of a low PPV of skin tests in children.
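The PPV and NPV quoted above come from the usual 2×2 confusion table. A minimal sketch, with invented counts that echo the pattern described (many false positives, few false negatives):

```python
def predictive_values(tp, fp, tn, fn):
    # PPV: fraction of positive skin tests that are true allergies;
    # NPV: fraction of negative tests that are true tolerances.
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Invented counts for illustration: low PPV but high NPV.
ppv, npv = predictive_values(tp=2, fp=4, tn=93, fn=1)
```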
In addition, low efficiency, resource intensiveness, and pain may limit the use of skin tests in children, as one study indicated that prescription costs were much higher in patients labeled with penicillin allergy. Although the negative predictive value (NPV) of skin tests is high in both children and adults, some patients can experience an anaphylactic reaction after a negative result. Two studies investigated mild immediate and nonimmediate reactions to amoxicillin in children and found a significant false-negative rate with the standard penicillin skin test. In infants and young children, skin reactivity is poor, and false-negative results may occur. In addition, some drugs can suppress anaphylactic reactions, leading to false-negative results; the patient's medication history should therefore be confirmed before a skin test. Relatively few studies have evaluated the sensitivity and specificity of cephalosporin skin tests in patients with allergic reactions to cephalosporins, and the predictive value of the cephalosporin skin test before administration is not supported by sufficient evidence-based medical data. Although conventional skin testing before the administration of cephalosporins is not recommended, skin testing should be done in the following cases: (1) in patients with a specific history of type I (immediate) allergic reactions to a penicillin or cephalosporin for whom cephalosporins are clinically necessary, a cephalosporin with a side chain different from that of the culprit drug should be chosen after obtaining the patient's informed consent, and the skin test results have certain reference value; (2) skin testing should be done when it is required in the drug instructions. Skin testing is a painful method and difficult to interpret in children, especially infants, and a false-positive result may increase the number of children suspected of having allergies, thereby limiting the use of antibiotics.
The accuracy of skin tests in the allergic evaluation of suspected β-lactam allergic reactions has been highly debated recently. In patients with suspected β-lactam antibiotic allergy, non-β-lactam drugs are commonly used, or desensitization is performed when an alternative drug is unavailable. Unfortunately, drug resistance, wasted resources, reduced effectiveness, and more adverse reactions may occur with alternative or broad-spectrum antimicrobial agents, so all patients with suspected β-lactam allergy should be evaluated carefully. More accurate allergy tests at present 5.1 Drug provocation test Drug provocation test (DPT) is a method of administering a drug under controlled conditions to confirm whether there is an allergic reaction to the drug and whether the patient can tolerate it. The current data emphasize the accuracy of direct DPT in children with NIR and even potentially with IR that is considered low risk. In some studies, only 3.4%–14% of children with a history of mild NIR had positive DPT, and the reactions were mild. It is increasingly reported that direct DPT in children with a history of mild IR to β-lactam may be safe. Accurate diagnosis of β-lactam anaphylactic reactions in children is often based on DPT. In the last few years, direct DPT procedures without prior skin testing have gained acceptance as a safe and accurate strategy. According to the international consensus guidelines, skin testing is recommended as a first-line test for immediate drug allergy reactions. If the result of the skin test is negative, DPT, as the current gold standard for diagnosis, is performed to confirm or exclude the presence of an allergy to the drug, although no standardized protocols exist so far. Multiple studies supported the use of direct DPT without prior skin testing in pediatric and adult populations historically labeled with anaphylaxis to β-lactam antibiotics. 
Serious adverse events due to DPT were also infrequent. Some studies indicated that the false labeling of β-lactam anaphylactic reactions could be attributed to viral infections. Studies reported that direct DPT in children with a history of β-lactam anaphylaxis may be a safe and accurate strategy. One study evaluated the frequency of severe adverse reactions after a direct DPT in patients with reported historical allergies to penicillin or other β-lactam antibiotics. The results indicated that severe reactions due to DPT are infrequent, and the superior safety of the DPT method supports its application in the diagnosis of penicillin anaphylaxis, contributing to the correct use of antibiotics, minimizing drug-induced risks, and improving clinical treatment outcomes. However, DPT reproduces not only hypersensitivity symptoms but also any other adverse clinical manifestation, and some patients are reluctant to be re-exposed to the drug. Thus, DPT may be harmful and should only be considered after balancing the risk–benefit ratio for the individual patient. In addition, the PPV of DPT may be lower than expected; one study therefore suggested that an allergic result should be confirmed with a second DPT within a few weeks or months to remove false allergy labels and ensure the safe use of drugs. 5.2 Oral provocation test The oral provocation test (oral challenge) is a method to determine whether a patient is allergic to a drug. A systematic review found two studies reporting positive predictive values of skin tests in children of 36% and 33%, respectively; a skin test could therefore lead to an inaccurate diagnosis, and an oral provocation test was finally needed to confirm tolerance in most of these children. In immediate and non-immediate reactions, the gold standard procedure to determine acute β-lactam tolerance was the oral provocation test. 
Oral challenge used a therapeutic β-lactam dose and at least 1 h of observation; it was costly and time-consuming. In the case of mild non-immediate reactions in children, skin tests were less commonly used, and oral provocation tests were a safe procedure. The oral provocation test is formally contraindicated if there is a history of severe cutaneous adverse reactions. In some studies, the evaluation of the direct oral provocation test was performed excluding high-risk patients. The oral provocation test is considered accurate with high positive and negative predictive values. A direct oral provocation test without a previous skin test has been increasingly used in patients, especially children with a history of mild, non-immediate reactions to β-lactam. A study evaluated 119 children with a history of mild, non-immediate cutaneous reactions induced by β-lactam through direct oral provocation. Only four (3.4%) reacted with urticaria during oral provocation, and there was no severe reaction. Further studies, including those of various populations and age groups, are needed to enable a stronger recommendation in this regard. 
Conclusion β-lactam antibiotics, including penicillins and cephalosporins, are common causes of drug hypersensitivity reactions in children. The β-lactam antibiotic skin test is widely used to predict anaphylactic reactions before medication. However, multiple studies highlighted the suboptimal diagnostic value of skin tests in children; positive results of skin tests were more often encountered in pediatrics than in adults. In fact, most children with reported β-lactam allergies are not allergic, which leads to the use of broad-spectrum antibiotics, additional costs, and significantly increased drug resistance and complications. Given the limitations of β-lactam antibiotic skin tests, drug provocation tests and oral challenges are the current standards in the management of pediatric β-lactam allergies, although no standardized protocols exist at present. Direct drug provocation tests and oral challenges that skip skin tests in appropriate patients are gaining acceptance as delabeling strategies, and these strategies and skin tests can complement each other. LZ and WL provided ideas for the manuscript and reviewed the manuscript. CG consulted references and wrote the manuscript. BM provided advice for further modifications to this manuscript. All authors contributed to the article and approved the submitted version.
Apexification of an Endodontically Failed Permanent Tooth with an Open Apex: A Case Report with Histologic Findings | 90e08eeb-ae98-4d85-8bc8-c44d19bc0076 | 11857209 | Dentistry[mh] | Traumatic injuries to permanent teeth may result in damage to the periodontium, adjacent bone, and the neurovascular supply of the pulp. The outcome of the compromised pulp will be dictated by the natural balance between cellular ingrowth and bacterial infiltration, resulting in either sterile necrosis, infection-induced necrosis, revascularization, or regeneration of the injured pulp . A significant consequence of developing pulp necrosis in a traumatized immature tooth is the cessation of root growth. This occurrence will result in thin, fragile dentinal walls, complicating appropriate debridement and optimal apical sealing with conventional endodontic treatment procedures . The management of such cases is considered to be challenging for the dental professionals, necessitating different approaches. Traditionally, the apexification procedure served as a treatment modality to either induce the formation of an apical barrier or continue the development of an immature apex . For an extended period of time, apexification entails the application of calcium hydroxide (Ca[OH] 2 ) paste to achieve root-end closure, which was subsequently followed by root canal therapy . This long-term therapy presents several disadvantages, such as challenges in patient follow-up, inconsistency in process of apical closure, and compromised tooth structure, which increases the risk of root fracture . Subsequently, mineral trioxide aggregate (MTA), a calcium silicate-based hydrophilic cement, was introduced to the area of endodontics by Torabinejad and colleagues. 
This material demonstrated biocompatibility, induced odontoblastic development, exhibited antibacterial properties, possessed low solubility, and expanded upon setting; hence, MTA emerged as the preferred material for apexification by facilitating the placement of an artificial apical plug to encourage apical-end closure. Nevertheless, MTA is hydrophilic and requires moisture for setting, has prolonged setting times of up to 3 h, and presents handling challenges, prompting the exploration of alternative materials. Subsequent calcium silicate-based materials were introduced to address these issues, including Biodentine ™ (Septodont, Saint-Maur-des-Fosses, France), iRoot BP Plus (Innovative BioCeramix, Vancouver, BC, Canada), and TotalFill ® BC RRM ™ Putty (FKG Dentaire, Sàrl Le Crêt-du-Locle, Switzerland), among various other brands. These materials have decreased the setting time to an average of 9–12 min, hence eliminating the two-step obturation procedure, and have consequently been utilized in apexification cases. Regenerative endodontic treatment (RET) is a treatment modality that has been implemented in recent years for properly selected cases of immature permanent teeth with necrotic pulp. This treatment aims to revitalize the damaged tissues within the canal space and to facilitate root maturation and thickening of the dentinal walls by hard tissue deposition. RET is founded on a tissue bioengineering paradigm that incorporates four critical components to achieve successful outcomes: stem cells, scaffolds, bioactive growth factors, and disinfection. Although RET is regarded as an alternative treatment option for an infected immature tooth, numerous studies demonstrated inconsistency in root lengthening, dentinal wall thickening, and apical closure. 
Apexification is a well-established treatment that has been shown to have favorable outcomes and consistent results, as evidenced by several clinical studies and case reports. The primary radiographic outcomes seen are the resolution of apical radiolucency, development of an apical barrier, and apical closure . Histological studies of apexification procedures in human and animal models demonstrated the formation of newly mineralized tissue above the apical foramen, defined as either bone-like tissue, cementum-like tissue, or osteodentin tissue . To our knowledge, there is limited histological evidence supporting the apexification treatment of an endodontically failed tooth. The present case describes the successful clinical and histological observations of an apexification procedure for an endodontically failed tooth with an open apex. A 24-year-old Caucasian female patient was referred to the Department of Endodontics at the College of Dentistry, King Saud University, Riyadh, Saudi Arabia, to assess the right maxillary central incisor. The patient’s chief complaint was the presence of mild-to-moderate pain during biting and discoloration on her upper front teeth. The patient had a history of trauma to the anterior maxillary region 10 years ago, during which she underwent root canal treatment at a private clinic. The patient has no history of any systemic disease, and according to the American Society of Anesthesiologists (ASA) classification, she is class ASA I. A clinical examination of the right maxillary central incisor (#11) revealed a defective tooth-colored restoration and mild crown discoloration compared to the adjacent teeth ( A). The pulp testing, which involved applying Endo-Frost (Coltène/Whaledent GmbH+ Co. KG, Langenau, Germany) with a cotton pellet and using an electric pulp tester (Analytic Technology, Redmond, WA, USA), revealed no response. 
Percussion and palpation recorded mild tenderness and pain; the tooth showed no mobility, and periodontal probing depths were within normal limits. The preoperative periapical radiograph revealed an inadequate root canal filling that was short of the apex, accompanied by defective tooth-colored restoration ( B). The apical region of the root exhibited a short root with a blunderbuss canal and an open apex, along with slight apical radiolucency. Based on clinical and radiographic findings, the endodontic diagnosis revealed a previously treated tooth with symptomatic apical periodontitis. Subsequent to a thorough discussion of the treatment options with the patient, the options presented include: an endodontic approach followed by the placement of a post/core and crown, extraction with or without subsequent replacement, or the option of no treatment. Based on the clinical assessment, the tooth has a favorable prognosis; thus, the indicated treatment option involves an endodontic treatment, succeeded by the placement of a post-core-crown restoration. The endodontic treatment options and procedures were explained to the patient, including non-surgical root canal retreatment with either regenerative endodontic treatment (RET), conventional calcium hydroxide apexification, or one-step apexification. Following consultation with the prosthodontist, regenerative endodontic treatment was excluded due to the necessity of a post in the root canal space to support the ceramic crown; thus, one-step apexification was selected. Informed written consent was obtained from the patient to perform a one-step apexification procedure after engaging in a discussion regarding the treatment of the tooth. There was no ethical conflict. 2.1. First Treatment Visit The patient was anesthetized with 2% lidocaine with 1:100,000 epinephrine (Novocol Pharmaceutical, Cambridge, ON, Canada) using an infiltration technique. 
Tooth number 11 was isolated under a rubber dam, and the access cavity was re-opened. The gutta-percha was removed with H-files, and the working length was established using an electronic apex locator (Root ZX, J Morita MFQ Corp., Kyoto, Japan), measured at 0.5 mm short of the apex with a K-file #100 and confirmed by a radiograph ( A). The canal walls were not enlarged, and irrigation was conducted with 10 mL of 1.5% sodium hypochlorite (NaOCl). A final flush with saline solution was performed, and the canal was dried with sterile paper points. Calcium hydroxide (UltraCal XS, Ultradent Products, Inc., South Jordan, UT, USA) medicament was placed in the root canal, the access cavity was sealed with the temporary restorative material Cavit G (3M Deutschland GmbH, Seefeld, Germany) ( B), and the patient was given a second appointment 3 weeks later. All procedures were conducted under an operating microscope (ZEISS microscopy, Jena, Germany). 2.2. Second Treatment Visit At the second visit, the patient was asymptomatic. Tooth number 11 was isolated using a rubber dam after the administration of local anesthetic, and access to the canal was accomplished. The root canal was thoroughly irrigated with 10 mL of 1.5% NaOCl, followed by a final rinse with 5 mL of saline solution, and then dried with sterile paper points. TotalFill ® BC RRM™ Putty (FKG Dentaire, Sàrl Le Crêt-du-Locle, Switzerland) was introduced into the canal and compacted apically using Schilder pluggers (DENTSPLY Caulk, Milford, DE, USA). A periapical radiograph was exposed to confirm adequate placement of the apical plug ( C). The remaining part of the canal was backfilled with injectable thermoplasticized gutta-percha. Then, the access cavity was restored with a Ketac™ Molar-Aplicap glass ionomer (3M Deutschland GmbH, Seefeld, Germany) and light-cured composite Filtek™ Z350 XT (3M Deutschland GmbH, Seefeld, Germany). Subsequently, a final periapical radiograph was conducted ( D). 2.3. 
Follow-Up Visit Clinical evaluation: The patient was recalled 6 months postoperatively and after 2 years and was asymptomatic at both follow-up visits. Radiographic evaluation: The two-year follow-up periapical radiograph showed the formation of a calcific barrier at the root apex with a normal periapical area in comparison to the preoperative periapical radiograph . The objective assessment of the calcified bridge by radiographic imaging is as follows: Calcified bridge dimension: the radiopaque band observed at the root apex demonstrates a sufficient thickness of approximately 2 mm in width and 3.5 mm in length, extending across the entire width of the canal to ensure adequate apical closure. Calcified bridge density: the radiopaque band exhibits uniformity, indicating consistent mineralization. Furthermore, the radiopacity is comparable to that of dentin or cementum and is clearly distinguishable from the surrounding radiolucent areas. During the subsequent follow-up visits, the prosthodontic treatment plan was revised because the patient declined the post-core crown restoration and preferred an implant for long-term survival. Consequently, the treatment option presented to the patient involved the continuation of endodontic therapy in conjunction with orthodontic extrusion to maintain the bone level before implant placement. The orthodontic extrusion period lasted 6 months, and the elastic was changed once a week. Subsequently, 3 months of stabilization were needed for the healing processes. Tooth #11 was replaced by a dental implant with a length of 10 mm and a width of 5 mm via a conventional protocol. The prosthetic part was made using a PFM crown. A postoperative photograph and periapical radiograph of the restored single-tooth implant are shown in . 2.4. Histologic Procedure Permission for histologic examination of the tooth was obtained from the patient. 
After extraction ( A), the tooth was immediately placed in a 10% neutral buffered formalin solution for fixation. After that, the tooth was decalcified in 7% formic acid until complete decalcification. Then the specimen was rinsed with running tap water for 2 hours, dehydrated with ascending concentrations of alcohol (70%, 90%, and 100%), and embedded in paraffin. After that, longitudinal serial sections were obtained with a microtome set at 4 µm thick in a buccolingual direction, and the specimens were stained with hematoxylin-eosin. Samples were observed under a light microscope to determine the histologic features. 2.5. Histologic Observation The histologic findings showed the formation of mineralized tissue at the root apex ( C). The primary component of this recently developed apical barrier was a continuous layer of dentin-like tissue located adjacent to the apical plug, which was characterized by dentinal tubule structures ( D). Incremental layers of cementum-like tissue, which are most likely acellular cementum tissue, were formed adjacent to the dentin-like tissue ( E). Connective tissue with distinct collagen fibers was observed next to the cementum-like tissue ( F). Also, connective tissue with calcified areas were observed next to the dentin-like tissue ( G). 
This case report describes mineralized apical tissue formation in an endodontically failed maxillary central incisor with an open apex after an apexification procedure. The techniques used for managing the open apex in necrotic teeth, with or without apical periodontitis, went through many treatment phases, including conventional Ca(OH) 2 apexification, artificial apical plug apexification, and regenerative endodontic treatment, each exhibiting various advantages as well as drawbacks. Conventional apexification using Ca(OH) 2 has demonstrated reliable outcomes; however, several drawbacks have been noted, including the extended duration of treatment and the requirement for periodic replacement of the intracanal dressing, necessitating multiple visits and patient compliance. Additionally, there is an elevated risk of root fracture due to the prolonged presence of Ca(OH) 2 within the root canal, as well as an increased likelihood of recontamination of the root canal system due to failures in the temporary seal. 
To address these limitations, the artificial apical plug approach, referred to as one-step apexification, has been developed for managing such conditions. Nonetheless, this approach lacks the capacity to promote the thickening of canal walls and/or continued root growth. The RET approach, unlike the apexification procedure, promotes the growth of immature roots, involving root wall thickening and lengthening, apical closure, and potential regeneration of tooth vitality. RET entails specific clinical considerations that must be adhered to in order to select the appropriate case. It is essential to consider patient and parental compliance, particularly given that the majority of cases involve young patients. Furthermore, the tooth should not require the placement of a post or core within the pulp space, and the patient should not exhibit any allergies to the medications and antibiotics utilized in this procedure. While RET has shown encouraging results, various limitations and adverse outcomes have been identified, encompassing an extended treatment duration, numerous appointments for disinfection, variable histological results, the possibility of crown discoloration, and the potential for treatment failure. In this particular case, RET was excluded since the tooth was designated for a post and core procedure; consequently, one-step apexification was selected as the treatment option. The success of the apexification procedure depends on the deposition of the calcified barrier, which is controlled by the differentiation of stem cells from the apical papilla (SCAP) that migrate from the healing periradicular tissues. 
The molecular foundation of the apexification healing process involves various growth factors, cytokines, transcription factors, and bone morphogenetic proteins (BMPs) that facilitate the differentiation of SCAP into dentin-like, cementum-like, or bone-like tissues and/or organic matrix via specific signaling pathways. The SCAP, derived from neural crest mesenchymal stem cells, are a distinct population with significant proliferative capacity, capable of self-renewal and exhibiting minimal immunogenicity. Furthermore, the SCAP are capable of remaining viable in an infected immature permanent tooth with apical periodontitis; hence, they are regarded as an essential biological source for the formation of the pulp-dentin complex and the continuing process of root development. Prior histological studies indicated a variable response of apical tissue to the apexification procedure. An animal study conducted by Ham et al. demonstrated periapical healing and the formation of new calcified tissue, recognized as bone-like tissue, cementum-like tissue, or osteodentin, at the root apex of infected, immature teeth. An additional animal study by Palma et al. indicated that the developed apical barrier predominantly consisted of cellular cementum encircled by periodontal ligament in most teeth treated with MTA apexification. Yang et al. showed that the calcified barrier formed following calcium hydroxide apexification treatment of an immature human premolar tooth was composed of immature hard tissue, connective tissue, and bone. In the present case, the histologic evaluation revealed an apical calcified barrier formed at the root apex, primarily composed of dentin-like tissue and cementum-like tissue. The dentin-like tissue was located adjacent to the apical plug and was distinguished by the presence of dentinal tubule structures. Adjacent to it, incremental layers of cementum-like tissue were identified, possibly representing acellular cementum tissue. 
Furthermore, regions of connective tissue exhibiting distinct collagen fibers were noted, along with connective tissue containing calcified patches. We are unable to correlate our findings with the published data, which exhibit considerable variability in the type of newly formed tissue, likely attributable to differing study standards: some employed animal jaw models while others examined human teeth, alongside variations in the treatment provided prior to histological assessment. Additionally, to the best of the author’s knowledge, this is the first histological study of an endodontically failed tooth that underwent successful apexification treatment. The objective assessment of the calcified bridge enables clinicians to ascertain the effectiveness of the formed bridge in sealing the apex and supporting periapical healing. The specific characteristics of the calcified bridge, including size, dimension, and density, can be assessed using radiographic imaging techniques such as periapical radiography or cone beam computed tomography. The radiograph in this investigation indicated a radiopaque structure at the apex of the root canal, consistent with a mineralized barrier. The calcified bridge exhibits adequate dimensions, measuring approximately 2 mm in width and 3.5 mm in length. The density and radiographic characteristics indicate sufficient mineralization and closure of the apical foramen. These findings are consistent with previous studies reporting the formation of calcified barriers during apexification procedures. Numerous biological factors that contribute to the failure of endodontic treatment have been identified. Nevertheless, the most prominent cause of failure is the persistence or regrowth of intraradicular infection. The disinfection of the root canal system in endodontically failed teeth is of great concern and may present obstacles when managing an infected immature tooth with thin dentin walls compared to its mature counterparts.
Evidence indicated that the use of Ca(OH) 2 medicament in MTA apexification treatments considerably promoted periodontal tissue repair and regeneration. The majority of reported apexification procedures, including the current report, were conducted over two clinical sessions, during which Ca(OH) 2 was applied as an intracanal medicament. The selection of material for the apical plug has a significant impact on apexification outcomes. It must exhibit superior biocompatibility, facilitate stem cell migration and differentiation, possess antimicrobial properties, remain insoluble, be user-friendly, and not induce discoloration. In addition to Ca(OH) 2 and MTA materials, contemporary literature supports the use of calcium silicate bioceramic materials for apical barrier formation. Interestingly, long-term prognostic studies demonstrated that apexification had high survival rates, irrespective of the type of bioactive material employed. Survival rates of Ca(OH) 2 apexification have been reported to reach 86%, with an average follow-up duration of five years. A recent long-term survival study of immature traumatized incisors indicated a median survival of 10 years for Ca(OH) 2 apexification and 16 years for MTA apexification. A retrospective study with an average follow-up duration of 3.3 years revealed that 86.3% of teeth treated with Biodentine ™ as an apical plug exhibited complete healing or showed signs of healing. A critical consideration in the treatment of teeth with wide-open apices is the avoidance of periapical extrusion of the apical plug filling material into the periradicular tissue. Overfilling or extension of the apical filling material has been demonstrated in prior histological investigations to correlate with significant inflammatory cell infiltration and the lack of apical barrier tissue development.
This inflammatory process is thought to have impeded the repair of periodontal tissue, hence interfering with the formation of the hard tissue barrier. It has been recommended to employ a matrix at the periapex in wide-open apices to control the compaction of MTA material and prevent its extrusion. A variety of biocompatible materials have been documented in the literature for this purpose, including dentin chips, bovine bone xenografts, calcium phosphate, oxidized cellulose, and platelet-rich fibrin. In the current study, we used a calcium silicate bioceramic material (TotalFill ® BC RRM™ Putty) as an apical plug; it is a pre-mixed condensable putty that allows for controlled placement without the necessity of an apical matrix. An interdisciplinary approach, along with accurate diagnostics, is essential for achieving improved, conservative, and predictable outcomes in aesthetic areas. The endodontist plays a crucial role in advising patients during the decision-making process between tooth preservation and extraction. This encompasses a discussion of the advantages, risks, and long-term consequences related to each option. In the present case, endodontic therapy followed by post-core-crown restoration was identified as the preferred treatment modality. Nonetheless, in accordance with the patient’s preferences, the treatment plan was amended to extraction followed by implant replacement. Orthodontic extrusion is implemented as a treatment modality that enhances both hard and soft tissue profiles prior to dental implant placement. The patient was satisfied with the color, morphology, and margins of the cemented restoration. The present case demonstrates the clinical and radiographic success of an endodontically failed permanent incisor with an open apex after an apexification procedure. A two-year follow-up visit revealed the absence of signs and symptoms and hard tissue formation at the root apex.
The histological evaluation of the newly formed mineralized tissue at the root apex revealed a continuous layer of dentin-like tissue with an identifiable dentinal tubule structure and incremental layers of cementum-like tissue. In addition, connective tissue with distinct collagen fibers and connective tissue with calcified areas were noted.
Metagenomic Insights into the Enhancement of Bioavailable Nitrogen in Continuous Cropping Soil Through the Application of Traditional Chinese Medicine Residue Following Fumigation | 19faef6d-09e0-4396-9fd2-4814a1becdda | 11675737 | Microbiology[mh] | Microbial ecological imbalance in the rhizosphere is a major contributor to obstacles in continuous cropping systems, disrupting the interdependent ecosystem formed by plants, soil, and microorganisms. This imbalance exacerbates the degradation of the soil microenvironment, impairs plant health, and promotes the excessive proliferation of pathogens, ultimately leading to outbreaks of soil-borne diseases . While soil fumigation is effective in mitigating soil-borne diseases, it adversely affects the structure of soil microbial communities, with different fumigants having varying impacts. For example, cottonseed fumigation reduces the populations of soil fungi, bacteria, and actinomycetes , whereas chloropicrin fumigation decreases the diversity of bacterial and fungal communities . Research has demonstrated that combining soil fumigation with bio-organic fertilizers is an effective integrated pest management strategy. This approach suppresses pathogenic microbes while enhancing beneficial microorganisms and restoring the balance of microbial community to control diseases. Applying chemical fertilizers following fumigation can aid in the recovery of beneficial soil microorganisms and increase the mortality of pathogenic microbes . Additionally, fumigation can directly inhibit fungal pathogens and, through the use of organic amendments, indirectly suppress both fungal and bacterial pathogens by altering microbial communities . Therefore, the complementary use of organic amendments and soil fumigants offers a promising strategy for controlling soil-borne diseases. 
Assessing the impact of combined fumigation and organic fertilizer application on soil structure is essential for identifying green organic fertilizers that facilitate the recovery of soil microbial communities. Traditional Chinese medicine residue (TCMR) refers to the solid plant-based residue remaining after the extraction of pharmaceutical constituents from medicinal materials, and it contains valuable soil nutrients. Conventional disposal methods such as incineration, landfill, and stacking face issues of resource wastage and environmental pollution. Converting TCMR into high-value products represents an effective approach for resource utilization. China produces around 70 million tons of TCMR annually, which is sufficient to meet the demands of agricultural practices. However, research on the effective use of TCMR in agricultural soils is still limited. TCMR contains numerous nutrients and active compounds; it has the potential to improve the stability of soil microbial communities, enhance soil fertility, and promote plant growth, and it holds significant prospects for resource utilization and agricultural production. Currently, in-depth investigations into the application of TCMR in agricultural continuous cropping soils remain scarce, and it is poorly understood whether TCMR can be effectively utilized as an organic fertilizer to enhance soil fertility post-fumigation, promote crop growth, and restore the microbial community in continuous cropping soils. Therefore, we hypothesized that (1) TCMR can alter soil nutrients, particularly nitrogen and organic matter, in continuous cropping soils; (2) the application of TCMR to fumigated soils can enhance soil quality, suggesting its potential use as an organic fertilizer; and (3) the application of TCMR in fumigated soil can improve the microbial community structure and increase the diversity of species related to nitrogen-cycling genes, providing a novel fertilization strategy.
In this study, we employed the rhizosphere soil of pepper crops cultivated continuously for two years as the research subject to investigate the effects of TCMR application post-fumigation on crop rhizosphere soil nutrients, to explore the influence of TCMR application on the rhizosphere microbial community, and to examine the correlation between the microbial community and soil physicochemical properties. 2.1. Soil Sampling and Treatments The study was conducted on rhizosphere soil (0–20 cm) from farmland near Hunan University of Science and Engineering (26°20′ N, 111°61′ E), where pepper had been continuously planted for 2 years. The soil was classified as yellow-cinnamon soil. The area has an average elevation of 1250 m and lies in the transition zone between the temperate and tropical zones, with an average annual temperature of 18.0 °C, a frost-free period of 285–311 days, and an average annual rainfall of 1595 mm. In the second year of planting (7 November 2022), after the pepper harvest, rhizosphere soil was collected from a depth of 0–20 cm using a sampler. Five soil cores from the rhizosphere of the pepper plants were randomly collected from each region and thoroughly mixed to form a composite rhizosphere soil sample. Subsequently, any sand and plant residues were removed, and the samples were stored in airtight bags before being transported back to the laboratory in an incubator. The samples were sieved through a 2-mm sieve for rhizosphere soil microcosm experiments and analysis of rhizosphere soil physicochemical properties. 2.2.
Soil Microcosm Experiments Five treatment groups were established: dazomet fumigation without TCMR application (M1, 300 kg/hm 2 , Jiangsu Qili New Energy Technology Co., Ltd., Taizhou, China); dazomet fumigation (300 kg/hm 2 ) combined with TCMR (20 t/hm 2 ) application (MC); 42% metam-sodium fumigation without TCMR application (W1, 400 kg/hm 2 , Shandong Lifan Chemical Industry Co., Ltd., Tai’an, China); 42% metam-sodium fumigation (400 kg/hm 2 ) combined with TCMR (20 t/hm 2 ) application (WC); and unfumigated soil with TCMR application (C1, 20 t/hm 2 ). Each treatment was repeated three times. The TCMR, derived from Andrographis paniculata, was provided by Jinan Baishun Technology Co., Ltd. (Jinan, China) and contained 46.68% SOM, 1.45% TN, and 1.18% P 2 O 5 . The TCMR was moistened and fermented with water for a week before application. The absolute soil water content was adjusted to 21% before the incubation, corresponding to a water-filled pore space (WFPS) of 45%. Soil samples of 300 g each (dry weight) were packed into 500 mL Duran wide-mouth glass bottles (Schott AG, Mainz, Germany) to simulate natural ecosystems under human-controlled settings. Chemical fumigants were added to the soil samples, while the control group received an equal volume of sterilized distilled water. The bottles were then sealed with stoppers and incubated at 28 °C. After 10 days of fumigation, the caps were removed, and the bottles were placed in a fume hood until the fumigants had completely dissipated. TCMR was subsequently added to the soil samples. Fresh air was introduced into all bottles using a pump, and the bottles were resealed and returned to the 28 °C incubator for an additional 59 days. At sampling, the caps were removed and the soil samples were mixed with a sampling spoon. The physicochemical parameters and molecular ecological measurements of the soil samples were then analyzed. If analysis could not be conducted in a timely manner, the samples were stored in a refrigerator at −80 °C. 2.3.
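The moisture adjustment described in this section (21% gravimetric water content corresponding to roughly 45% WFPS) follows the standard gravimetric-to-WFPS conversion, sketched below. The bulk density (1.18 g/cm³) and particle density (2.65 g/cm³) are illustrative assumptions, not values reported in this study.

```python
# Sketch: convert gravimetric water content to water-filled pore space (WFPS).
# Bulk density (bd) and particle density (pd) are assumed values for illustration;
# the study reports only the gravimetric content (21%) and the resulting WFPS (45%).

def wfps(gravimetric_wc: float, bd: float = 1.18, pd: float = 2.65) -> float:
    """WFPS = volumetric water content / total porosity."""
    volumetric_wc = gravimetric_wc * bd   # g water per cm3 soil = cm3 water per cm3 soil
    porosity = 1.0 - bd / pd              # total pore volume fraction
    return volumetric_wc / porosity

print(round(wfps(0.21), 3))  # ~0.447, close to the 45% WFPS reported for 21% water content
```

With these assumed densities, a 21% gravimetric content lands within half a percentage point of the stated 45% WFPS; for real soil, the measured bulk density of the packed microcosms would be used instead.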
Soil Nutrient Determination Soil water content (SWC) was determined by drying the soil at 105 °C for 12 h. The pH (soil/water = 1:2.5) was measured using a pH meter, and soil organic matter (SOM) was determined using the H 2 SO 4 -K 2 Cr 2 O 7 oxidation method. Soil total phosphorus (TP) and available phosphorus (AP) were determined using the molybdenum antimony colorimetric method. Total nitrogen (TN) was measured using the Kjeldahl method. Soil nitrate (NO 3 − -N) and ammonium (NH 4 + -N) were extracted with potassium chloride and analyzed on a continuous flow analyzer. Urease, catalase, sucrase, and neutral phosphatase activities were determined using kits provided by Nanjing Jitest Biotechnology Co., Ltd. (Nanjing, China). 2.4. DNA Extraction and High-Throughput Sequencing The FastDNA Spin Kit (MP Biomedicals, Santa Ana, CA, USA) was used to extract microbial DNA from 0.5 g of fresh soil, and the concentration and purity of the extracted DNA were determined using a NanoDrop spectrophotometer (Thermo Scientific, Wilmington, NC, USA). Genomic DNA samples were fragmented by sonication to a size of 350 bp. The DNA fragments were then end-polished, A-tailed, and ligated with a full-length adapter for Illumina sequencing. Sequencing was performed on a NovaSeq 6000 platform (Illumina Inc., San Diego, CA, USA) at Wekemo Tech Co., Ltd., Shenzhen, China. Raw sequence data averaging 3.31 Gb (gigabases) per sample were obtained and deposited into the Genome Sequence Archive (GSA) under accession number PRJCA032586. Clean data were aligned to the host database using Bowtie2 (version 2.3.5.1, http://bowtie-bio.sourceforge.net/bowtie2/index.shtml , 14 July 2023) with default settings to filter out host-origin reads for subsequent analysis. The quality and effectiveness of the quality control process were assessed using FastQC (version 0.11.9, https://en.wikipedia.org/wiki/Fastq , 14 July 2023). Kraken2 (ver.
2.0.7-β) and a self-built microbial database (sequences belonging to bacteria, fungi, archaea, and viruses screened from the NCBI NT nucleic acid database and RefSeq whole-genome database) were used to identify the species contained in the samples, and Bracken was then used to estimate the actual relative abundance of species in the samples. The clean reads, after quality control and host removal, were aligned against the UniRef90 database ( https://www.uniprot.org/uniref?query=* , 25 July 2024) using HUMAnN3 software (ver. 3.6) based on DIAMOND (ver. 0.8.22, https://github.com/bbuchfink/diamond , 25 July 2024). Statistical analysis of the relative abundance of nitrogen-cycle-related genes was based on the correspondence between KEGG (ver. 94.2, http://www.genome.jp/kegg/ , 17 August 2024) and UniRef90 (mainly from LinkDB). 2.5. Data Analyses Prior to analysis, logarithmic or square-root transformations were applied to the soil physicochemical properties, and the OTU table was rarefied. R-4.1.2 was used for the graphics and statistical analyses in this study. One-way analysis of variance (ANOVA) and multiple comparisons using Duncan’s method were conducted to analyze the physicochemical properties, microbial diversity, and functional differences of the rhizospheric soil using SPSS Statistics 23.0. To study the species composition and diversity of the samples, all valid sequences from all samples were annotated and classified with Kraken2 (parameter: --confidence 0.2). Biomarkers of the different groups were defined using LEfSe analysis to identify biological markers with significant differences between groups; a threshold logarithmic LDA score of 4.0 was applied. Non-metric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) based on the Bray–Curtis distance were employed to examine the differences in gene composition among treatments.
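The Bray–Curtis/PCoA ordination step can be sketched as follows; this is a minimal illustration on a toy abundance table with invented counts, not the actual species table produced by Kraken2/Bracken.

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.abs(x - y).sum() / (x + y).sum()

def pcoa(counts):
    """Principal coordinate analysis on a Bray-Curtis distance matrix."""
    n = len(counts)
    d = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
                  for i in range(n)])
    # Gower double-centering of the squared distance matrix
    center = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * center @ (d ** 2) @ center
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]                    # sort axes by eigenvalue
    vals, vecs = vals[order], vecs[:, order]
    coords = vecs * np.sqrt(np.clip(vals, 0.0, None))  # sample coordinates
    return coords, vals

# Toy table: 4 samples (rows) x 3 taxa (columns), forming two obvious groups
toy = np.array([[10, 0, 5],
                [8, 1, 6],
                [0, 12, 2],
                [1, 10, 3]])
coords, vals = pcoa(toy)
```

On this toy table, the first principal coordinate cleanly separates the two sample groups, which is the kind of treatment-level separation the ordination plots in this study are read for.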
Metagenomic data were compared and annotated with Level3 pathway information in the KEGG database to explore the relationship between microbiome species and nitrogen metabolism. DiTing (version 0.9) software was used to infer and compare biogeochemical pathways in metagenomic data. Pearson’s correlation analysis was used to reveal the correlations between N-cycle-related processes, normalized abundances of N-cycle-related genes, and soil properties, and the correlations were visualized in heatmap plots. 3.1. Responses of Rhizospheric Soil Nutrients to TCMR Application TCMR significantly enhanced the total nutrient content of the rhizospheric soil. C1, MC, and WC significantly increased the contents of TP, TN, and NO 3 − -N, while NH 4 + -N contents decreased, compared with CK. When combined with fumigants, the contents of TP, TN, and NO 3 − -N were elevated. TCMR application (C1, MC, and WC) showed higher TP, TN, and NO 3 − -N contents compared to fumigant treatment (W1 and M1). TCMR application also improved total soil enzyme activity, with W1 showing lower activity than WC and M1 showing lower activity than MC. 3.2. Responses of the Rhizosphere Microbial Community Composition and Diversity to TCMR Application TCMR application significantly impacted the α diversity of bacterial and fungal microbial communities, as illustrated in a–f. However, not all effects were statistically significant. The results of PCoA indicated that TCMR application significantly influenced the composition of the rhizosphere microbial community ( p < 0.01) ( a,b). At the phylum level, Pseudomonadota, Actinomycetota, and Bacillota were identified as dominant bacterial populations in the rhizosphere. Similarly, Ascomycota dominated the rhizosphere fungal community, with a notable increase in Pseudomonadota following TCMR application.
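As a concrete illustration of the α-diversity comparison summarized above, one commonly used index (Shannon H′; the specific indices behind panels a–f are not named on this page) can be computed from taxon counts as in this sketch, which uses invented communities:

```python
import math

# Illustrative computation of two toy communities' Shannon diversity.
# The counts below are invented; they are not data from this study.

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxon proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

even = [25, 25, 25, 25]   # evenly distributed community
skewed = [97, 1, 1, 1]    # one dominant taxon

print(shannon(even))    # ln(4) ~ 1.386, the maximum for 4 taxa
print(shannon(skewed))  # much lower: dominance reduces diversity
```

The same calculation applied per sample to the rarefied species table yields the per-treatment diversity values that are then compared by ANOVA.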
Beneficial bacteria such as Mesorhizobium significantly increased in relative abundance, while pathogenic bacteria such as Afipia decreased at the genus level ( a,b). LEfSe analysis revealed 23 genus-level biomarkers, including significant differences in the abundance of taxa such as Pseudarthrobacter, Achromobacter, Mesorhizobium, Afipia, and Pseudomonas ( c,d). 3.3. Relationships Among Rhizosphere Soil Nutrients, Keystone Species, and Community Building RDA was performed with the R vegan package and visualized with ggplot2. The results showed that rhizospheric soil nutrients explained 69.89% and 58.35% of the keystone species variation in the bacterial and fungal communities, respectively, and TN, TP, and urease significantly affected the composition of bacterial keystone species. TN and urease significantly influenced the composition of bacterial keystone species, while TP significantly influenced the composition of fungal keystone species. That is, soil nutrients regulated the microbial community construction process by affecting changes in the microbial keystone species. However, this result was not observed in the construction of fungal communities, indicating that bacterial and fungal community construction proceed through different mechanisms. The abundance of keystone species was significantly correlated with TN, urease, TP, NO 3 − -N, and NH 4 + -N. TN, urease, and TP were identified as the key factors affecting microbial community construction and composition ( p < 0.001). 3.4. Analysis of Nitrogen Metabolic Activity Under Different Conditions Further analysis of nitrogen metabolism was conducted at KEGG pathway level 3. The top 10 microbial taxa involved in nitrogen metabolism were identified, including Rhodoplanes, Novibacillus thermophilus, Mesorhizobium, Pseudomonas, Gemmatirosa kalamazoonesis, Microbacterium, Bradyrhizobium pachyrhizi, and Streptomyces sp. CB03911.
TCMR application increased the abundance of S. sp. CB03911, B. pachyrhizi , and Pseudomonas might be a species source that promotes nitrogen metabolism, while Rhodoplanes was associated with a decrease. 3.5. Responses of Functional Genes and Factors of Nitrogen Cycle in TCMR Application Different treatments had distinct impacts on soil microorganisms, with a multitude of functional genes involved in the nitrogen cycle. The KO abundance table revealed six nitrogen metabolic pathways, primarily denitrification and dissimilatory nitrate reduction pathways. Abundant gene clusters related to nitrogen cycling included narGHI , napAB , nirK , nirS , nirBD , and nrfAH ( a). The total abundance of genes such as can , cynT , nirB , glnA , GLUL , narK , NRT , and nrtP significantly increased with TCMR treatments (MC, WC, C1), but not with M1 and W1, while the total abundance of nxrB , nirK , narγ , narH , and GLUD1_2 significantly decreased ( b). Correlation analysis revealed that TN and urease were positively correlated with nirB , nirA , and nifHDK but negatively correlated with amoABC and hao . NO 3 − -N was negatively correlated with nirK and norBC genes, while NH 4 + -N was positively correlated with these genes. TP was positively correlated with narB and nasA ( c). TCMR significantly enhanced the total nutrient content of the rhizospheric soil. C1, MC, and WC significantly increased the contents of TP, TN, and NO 3 − -N, while NH 4 + -N contents decreased, compared with CK. When combined with fumigants, the contents of TP, TN, and NO 3 − -N were elevated. TCMR application (C1, MC, and WC) showed higher TP, TN, and NO 3 − -N contents compared to fumigants treatment (W1 and M1). TCMR application also improved total soil enzyme activity, with W1 showing lower activity than WC and M1 showing lower activity than MC . TCMR application significantly impacted the α diversity of bacterial and fungal microbial communities, as illustrated in a–f. 
However, not all effects were statistically significant. The results of PCoA indicated that TCMR application significantly influenced the composition of the rhizosphere microbial community ( p < 0.01) ( a,b). At the phylum level, Pseudomonadota, Actinomycetota, and Bacillota were identified as dominant bacterial populations in the rhizosphere. Similarly, Ascomycota dominated the rhizosphere fungal community, with a notable increase in Pseudomonadota following TCMR application. Beneficial bacteria, such as Mesorhizobium, significantly increased in relative abundance, while pathogenic bacteria like Afipia decreased at the genus level ( a,b). LEfSe analysis revealed 23 genus-level biomarkers, including significant differences in the abundance of species such as Pseudarthrobacter, Achromobacter, Mesorhizobium, Afipia, and Pseudomonas ( c,d). RDA analysis mainly relies on the R language VEGAN package and the visualization with ggplot2. The result showed that rhizospheric soil nutrients explained 69.89% and 58.35% of the keystone species variation in bacteria and fungal, respectively, and TN, TP, and urease significantly affected the composition of bacterial keystone species . TN and urease significantly influenced the composition of bacterial keystone species, while TP significantly influenced the composition of fungal keystone species. That is, soil nutrients regulated the microbial community construction process by affecting the community change of microbial keystone species. However, this result was not observed in the construction of fungal communities, which indicated that the process of construction of bacterial and fungal communities had different mechanisms. The abundance of keystone species was significantly correlated with TN, urease, TP, NO 3 − -N, and NH 4 + -N. TN, urease, and TP were identified as keystone factors affecting microbial community construction and composition ( p < 0.001). 
Further analysis of nitrogen metabolism was conducted at KEGG pathway level 3. The top 10 microbial communities involved in nitrogen metabolism were identified, including Rhodoplanes, Novibacillus thermophilus , Mesorhizobium, Pseudomonas, Gemmatirosa kalamazoonesis , Microbacterium, Bradyrhizobium pachyrhizi , and Streptomyces sp. CB03911. TCMR application increased the abundance of S. sp. CB03911, B. pachyrhizi , and Pseudomonas, which might serve as species sources that promote nitrogen metabolism, while Rhodoplanes was associated with a decrease. 4.1. Response of the Microbial Community Construction to Added TCMR and Its Influencing Factors The establishment and maintenance of soil biomes is an intricate process involving the interplay of multiple factors. Fumigants not only directly impact crop yield but also indirectly influence plant productivity through their effects on the soil microbiome.
The application of bio-organic fertilizer after chemical fumigation is critical for ensuring a balanced nutrient supply and promoting the recovery of soil microbial communities, which is essential for restoring rhizosphere immune barriers. Our findings indicate that TCMR application significantly influences the construction of the rhizosphere microbial community. Different fumigants exert varying effects on soil microorganisms. Compared to dazomet fumigation, metam-sodium fumigation led to a significant reduction in bacterial diversity while causing a notable increase in the fungal community, consistent with previous research. The addition of TCMR substantially promoted bacterial community construction but did not significantly affect the fungal community. Keystone species in the bacterial community primarily belonged to Pseudomonadota, Actinomycetota, and Bacillota. Notably, the relative abundance of beneficial bacteria such as Mesorhizobium increased, while pathogenic bacteria such as Afipia decreased. Additionally, Actinomycetota played a role in decomposing organic matter, regulating the soil microenvironment, and maintaining soil ecological balance by reducing pathogenic bacteria. Ascomycota was the dominant fungal population at the phylum level, and TCMR application increased its relative abundance. This could be linked to the presence of eutrophic fungi, which are associated with enhanced soil fertility. We observed that bacterial community composition was predominantly influenced by soil nutrient levels, with TN, SOM, and urease playing significant roles in shaping bacterial communities, consistent with previous research. In contrast, TP, SOM, and TN significantly affected the composition of keystone fungal species, which is inconsistent with the conclusions of some studies. These discrepancies may be attributable to differences in land use and fertilization practices.
Additionally, distinct mechanisms underlie the construction of bacterial and fungal communities. In future investigations, factors such as root exudates, root morphology, and pot planting should be taken into consideration. Moreover, we found that TCMR treatment led to a significant increase in TN, TP, NO 3 − -N, and urease levels in the rhizosphere soil, and a significant decrease in NH 4 + -N. In soil, urease catalyzes the breakdown of urea to produce ammonium. Although ammonium is the preferred nitrogen source for both microbes and plants, nitrogen fixation, nitrite reductase, and the nitrate/nitrite transport systems are subject to ammonium-induced repression, which can ultimately affect crop yields. TCMR might therefore regulate nitrogen metabolism by affecting ammonium metabolism. These factors not only influenced the microbial functional model for carbon and phosphorus cycling but also played a pivotal role in shaping nitrogen cycling functional gene combinations. Our results suggest that microbial functional traits related to soil nutrient cycling may be highly responsive to soil nitrogen bioavailability. The increased abundance of nitrogen-metabolizing microbial communities, such as Rhodoplanes, N. thermophilus , Mesorhizobium, and others, further supports this conclusion. 4.2. Responses of Keystone Genes in the Nitrogen Cycle to Added TCMR and Their Correlations with the Microbial Community Structure Microbial activity is essential in regulating nitrogen conversion, involving six classical processes: nitrogen fixation, nitrification, denitrification, assimilatory nitrate reduction (ANRA), dissimilatory nitrate reduction (DNRA), and ammonification. In this study, we identified six nitrogen metabolic pathways, with denitrification and DNRA being the most prominent. Consistent with previous research, the combined application of TCMR and fumigation increased the abundance of denitrification genes relative to fumigation alone.
Heterotrophic microorganisms, which are primarily responsible for denitrification, benefit from the addition of organic materials, as these enhance nutrient availability and promote the growth of denitrifying microorganisms. Furthermore, the DNRA pathway was found to mitigate nitrogen loss by converting soil nitrate into ammonium nitrogen. The combined application of TCMR and fumigation not only regulated soil pH and reduced heavy metal content but also facilitated the rapid restoration of nitrogen cycle processes in the soil ecosystem. The changes in microbial functional genes reflect the internal driving forces of soil nitrogen cycling. Under TCMR application, the reduction of NO 3 − to NO 2 − was significantly enhanced, a step primarily catalyzed by membrane-bound nitrate reductases (NAR, narG ), periplasmic nitrate reductases (NAP, napA ), or assimilatory nitrate reductases (NAS, nasA , nirA ). Additionally, TCMR fertilization significantly increased the abundance of carbonic anhydrase genes ( can , cynT ), which are involved in CO 2 capture and the promotion of calcium carbonate precipitation through heterotrophic denitrification. Interestingly, the application of TCMR led to a marked increase in the abundance of the DNRA-related gene nirB while decreasing the abundance of the denitrification gene nirK . The genes nirB and nirK encode nitrite reductases, which are crucial for nitrogen metabolism; nirB also plays a role in nitrogen assimilation. Ammonium-induced repression regulates the expression of the structural gene for nitrite reductase during nitrogen assimilation. The application of TCMR reduces ammonium concentrations in the rhizosphere, leading to an increase in the abundance and activity of nirB . This not only accelerates nitrogen conversion but also enhances nitrogen uptake and utilization. These findings diverge from some previous studies, potentially indicating unique interactions between nitrogen cycling genes and TCMR treatments.
The combined application of TCMR and fumigation significantly enriched soil TN, and such treatment was positively associated with genes like nirB , nirA , and nxrAB and negatively correlated with hao and amoABC . These linkages suggest that TCMR application inhibits microbial nitrogen assimilation while promoting microbial nitrogen mobilization, resulting in an accumulation of bioavailable nitrogen. The addition of TCMR and the accumulation of bioavailable nitrogen after fumigation were significantly correlated with soil nitrogen content across the whole habitat. This may suggest that nirB and nirK are keystone genes regulating the soil nitrogen cycling process, promoting the mobilization of nitrogen in continuous cropping, improving species diversity after fumigation, and promoting the sustainable use of nitrogen.
The combined application of TCMR after chemical fumigation substantially aids in the reconstruction of bacterial communities while having no obvious effect on fungal populations. This treatment fosters an increase in microbial species diversity and promotes the abundance of functional genes related to nitrogen cycling. Specifically, the diversity and abundance of denitrification and DNRA genes were positively influenced by TCMR application. Soil organic nitrogen was found to be closely associated with keystone genes such as nirB and nirK , which regulate microbial nitrogen uptake and transport. Moreover, TCMR fertilization led to the enrichment of the can and cynT genes, which are involved in microbial mineralization processes. These genes promote the precipitation of calcium carbonate through heterotrophic denitrification and CO 2 capture, highlighting the coupling of microbial metabolic processes. In conclusion, the application of TCMR following fumigation effectively promotes the construction of the soil microbial community and significantly influences the functional dynamics of the soil nitrogen cycle, ultimately enhancing the accumulation of bioavailable nitrogen and increasing crop yield.
A powerful and versatile new fixation protocol for immunostaining and in situ hybridization that preserves delicate tissues

Regeneration is the ability to restore tissues or organs lost to injury, and it varies widely among metazoans. While some animals like fish and axolotls are capable of regenerating certain appendages and tissues, others like planarian flatworms and Hydra are capable of whole-body regeneration. The cellular and molecular activities that drive regeneration are not yet fully understood. Understanding the molecular changes that take place in the delicate wound epidermis and newly produced tissue is essential to revealing the molecular basis of regeneration. RNA in situ hybridization (ISH) is a key method for studying gene expression patterns both during homeostasis and regeneration. Unlike bulk and single-cell RNA-sequencing methods, ISH provides extensive detail by visualizing gene expression patterns in their native tissue contexts. Furthermore, because this method does not require transgene expression, it can be performed on wildtype research organisms that do not yet have developed genetic toolkits. As such, it is particularly useful for research questions being pursued in diverse research organisms. The freshwater planarian S. mediterranea can regrow a complete animal from a body fragment that is less than 1% of its original size. This remarkable capacity for regeneration has attracted the attention of generations of biologists. Its study has required the development of methods to detect, measure, and visualize the cells and molecules underpinning regeneration. ISH has been a primary tool for studying the biology of planarian stem cells and regeneration. Yet, current ISH protocols have several shortcomings. Penetration of probes into tissue for whole-mount in situ hybridization (WISH) is difficult to achieve.
As such, permeability is increased through tissue digestion with proteinase K and through aggressive treatment with the mucolytic agent N-acetyl cysteine (NAC). These harsh treatments can damage or destroy delicate tissues and often result in the shredding of both the epidermis and the regeneration blastema (the fragile unpigmented tissue at the wound edge which gives rise to lost body parts). Moreover, immunological assays often perform poorly on samples prepared by this protocol, likely because proteinase digestion disrupts target epitopes. Other protocols have been developed for fixing whole planarians that preserve gross anatomical structures and perform well in immunological assays, but those methods are not compatible with ISH. An ideal method would preserve delicate tissues and permit the simultaneous analysis of RNA and protein expression patterns. Here, we present a new fixation protocol for ISH and immunofluorescence in planarians. We have combined approaches from several fixation techniques into a Nitric Acid/Formic Acid (NAFA) strategy for sample preparation that better preserves the delicate epidermis and blastema than previous methods do. This NAFA protocol does not include a protease digestion, providing increased compatibility with immunological assays while not compromising ISH signal. We also show this protocol can be easily adapted for ISH studies in the regenerating killifish tail fin. Thus, the protocol is potentially applicable to a wide range of species and particularly facilitates the study of delicate tissues via ISH and immunofluorescence. We sought to create a new fixation protocol for planarians that would be compatible with both ISH and antibody-based assays while preserving the structural integrity of the animals. We reasoned that combining the acid treatment strategies of a variety of protocols could make the samples compatible with multiple applications.
We also included the calcium chelator ethylene glycol-bis(β-aminoethyl ether)-N,N,N′,N′-tetraacetic acid (EGTA) to inhibit nucleases and preserve RNA integrity during sample preparation. To determine the extent to which the new combination of acids preserved the samples, we used the integrity of the epidermis as a proxy for tissue preservation, and we visualized it by immunostaining cilia with an anti-acetylated tubulin antibody. We tested a Nitric Acid/Formic Acid (NAFA) fixation and compared it against two well-established fixation protocols in the field, NA (Rompolas) and N-Acetyl-Cysteine (NAC). We found that the integrity of the epidermis is well preserved in both the NA (Rompolas) and NAFA protocols, whereas noticeable breaches of integrity were detected when the protocol using the mucolytic compound NAC was tested (Fig. ). We concluded from these results that the NAFA protocol worked as well as the NA (Rompolas) protocol and preserved the sample considerably better than the NAC protocol did. Given the success of the anti-acetylated tubulin antibody staining, we tested whether the NAFA protocol could be used for ISH assays. To ensure the NAFA protocol allows antisense RNA probe penetration into tissues, we chose genes known to mark the internal neoblast cell population ( piwi-1 ) and a more external cell population, a subset of the epidermal progenitors ( zpuf-6 ). First, we tested whether the expression of piwi-1 and zpuf-6 could be detected via chromogenic WISH (Fig. ). While the NAFA and NAC protocols produced indistinguishable patterns of expression for the two genes, we could not observe any piwi-1 or zpuf-6 signal with the NA (Rompolas) protocol (Fig. A, B). These experiments also revealed epidermal damage when NAC was used (Fig. B). To further investigate epidermal integrity and WISH signal, we performed chromogenic WISH for zpuf-6 using the NAC and NAFA protocols (Additional file 1: Fig.
S1) then sectioned the animals afterwards for histological analysis. The sections revealed that the outermost layer with zpuf-6 + cells was intact when using the NAFA protocol but damaged by the NAC protocol (Fig. S1A and S1B). Also, we tested whether three different carboxylic acids (formic acid, acetic acid, and lactic acid) can be used in the NAFA protocol. We performed chromogenic WISH for piwi-1 , zpuf-6 , in addition to markers of the central nervous system ( pc2 ) , and gastrovascular system ( porcupine ) . All showed similar expression patterns in both the NAFA and NAC protocols (Additional file 2: Fig. S2). While all three carboxylic acids can be used to determine gene expression patterns and are effective across multiple transcripts, we chose to use formic acid because it has the simplest chemical structure. We conclude from these findings that the new NAFA protocol both preserves epidermis integrity and can be used to detect gene expression in different planarian tissues via WISH. Next, we investigated whether we could use the new NAFA protocol in planaria to carry out fluorescent in situ hybridization (FISH) in tandem with immunostaining. Using confocal microscopy, we detected the neoblast and epidermal progenitor markers piwi-1 and zpuf-6 , respectively (Fig. ). The intensity of the piwi-1 fluorescent signal was indistinguishable between the NAC and NAFA protocols but much weaker for the NA (Rompolas) protocol (Additional file 3: Fig. S3). Furthermore, confocal microscopy showed that the epidermis was damaged with the NAC protocol but was not visibly affected when using the NAFA protocol (Fig. B). After whole-mount FISH, we immunostained for mitotic cells with an antibody that recognizes the Serine-10 phosphorylated form of histone H3 (anti-H3P) . While we did not observe statistically significant differences in H3P density among the protocols (Additional file 4: Fig. 
S4A), the anti-H3P antibody showed brighter signal with the NAFA protocol when compared to both the Rompolas and NAC protocols (Fig. A, Additional file 4: S4B, S4C). Therefore, NAFA is highly compatible with tandem FISH and immunostaining. Next, we sought to more thoroughly characterize the ability of the three protocols to label external and internal tissues by immunofluorescence using antibodies against acetylated tubulin and Smed-6G10. As in prior experiments, the NA (Rompolas) and NAFA protocols preserved the cilia, while they were damaged in the NAC protocol (Fig. B). In the case of the muscle antibody, we observed that all three protocols produced qualitatively similar staining patterns (Fig. C). However, NAC treatment sometimes damaged the body wall musculature, resulting in inconsistent stainings compared to the NAFA protocol (Additional file 5: Fig. S5). The NAFA protocol retained tightly packed, evenly spaced muscle fibers, including the outermost circular muscle fibers, while NAC treatment disrupted the integrity of the muscle fibers and in places lost the circular fibers (Additional file 6: Fig. S6A). To further compare the muscle staining between the NAC and NAFA protocols, we imaged the internal gut musculature and observed that the NAC protocol produced crisper stainings compared to the NAFA protocol (Additional file 6: Fig. S6B). Similarly, we evaluated both protocols’ compatibility with staining protonephridia, another internal structure which is also labeled by the anti-acetylated tubulin antibody. This approach allowed us to compare external vs. internal staining using the same antibody. We observed similar staining of protonephridia in both protocols, but the epidermal cilia were damaged in the NAC protocol while the NAFA protocol preserved the cilia (Additional file 6: Fig. S6C and S6D). Thus, the NAFA protocol is well suited to studying fragile external structures and most internal structures.
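Mitotic density measurements like the H3P comparison above are typically expressed as labeled nuclei per unit of body area. A hedged sketch of that quantification (all numbers are invented for illustration, not measurements from this study):

```python
def h3p_density(nuclei_count, area_mm2):
    """Mitotic (H3P-positive) nuclei per square millimeter of body area."""
    return nuclei_count / area_mm2

# Hypothetical (count, area in mm^2) pairs per animal for each fixation protocol
measurements = {
    "NAFA": [(210, 4.2), (190, 3.9), (205, 4.0)],
    "NAC": [(200, 4.1), (215, 4.3), (195, 3.8)],
    "NA (Rompolas)": [(198, 4.0), (188, 3.7), (207, 4.2)],
}

for protocol, pairs in measurements.items():
    densities = [h3p_density(n, a) for n, a in pairs]
    mean_density = sum(densities) / len(densities)
    print(f"{protocol}: mean H3P density = {mean_density:.1f} nuclei/mm^2")
```

Whether the protocol means actually differ would then be assessed with a statistical test across biological replicates.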
We then assessed if we could use the new NAFA protocol to develop two-color FISH with two different RNA probes using piwi-1 and zpuf-6 . Because the NA (Rompolas) protocol is not compatible with ISH, we only compared the NAFA and NAC protocols to each other. First, we detected zpuf-6 gene expression followed by piwi-1 . We used confocal microscopy to image the samples and observed similar expression patterns of piwi-1 in both protocols. However, the NAFA protocol showed a clearer expression pattern of the epidermal progenitor zpuf-6 likely because the integrity of the epidermis was preserved (Fig. A, B). After the double FISH, we explored the mitotic cells in the same samples using anti-H3P antibody. We observed comparable densities of H3P nuclei for both the protocols (Fig. A, B, and Additional file 7: Fig. S7). Therefore, NAFA is compatible with two-color FISH and immunostaining. To confirm that the NAFA protocol preserves the epidermis even after double FISH of piwi-1 and zpuf-6 , we subsequently performed immunostaining for cilia. The confocal images of the dorsal and ventral sides of planarians after two-color FISH showed well preserved cilia with the NAFA protocol. In contrast, we failed to detect the same pattern of cilia in planarians treated with NAC protocol (Fig. A, B). Hence, the NAFA protocol not only preserves the internal structures akin to the NAC protocol but also maintains epidermal integrity even after the strenuous protocol of labeling two separate transcripts and a protein. We next tested if we could use the NAFA protocol to study the wounding response during planarian regeneration without damaging the fragile epidermis or nascent blastema tissues. We performed FISH of piwi-1 and the immunostaining of cilia on trunk fragments 8 h post amputation (hpa) and at 1, 2, 4, and 8 days post amputation (dpa) to assay for epidermal integrity (Fig. A, B). 
Confocal images showed that epidermal integrity was compromised by the NAC protocol, while the NAFA samples had very clear staining of cilia on trunks throughout regeneration (Fig. A, B). Remarkably, while the piwi-1 FISH pattern was similar between the NAFA and NAC protocols, the NAFA-fixed fragments exhibited an area of undifferentiated tissue that could not be detected in the NAC fragments (compare white arrows in Fig. A to red arrows in Fig. B). We next imaged the blastema at higher magnification with confocal microscopy. These images reinforced that the NAFA protocol preserves the wound epidermis and the blastema, while it was heavily damaged by the NAC protocol (Fig. C and Additional file 8: Fig. S8). To independently verify preservation of the wound epidermis when using the NAFA protocol, we carried out Acid Fuchsin Orange G staining (AFOG). Cryosections of animals fixed with the NAC protocol showed extensive damage to the epidermis, while the NAFA-treated samples had well organized epidermis with tall cells and distinct basal lamina (red arrows) (Additional file 9: Fig. S9A and Fig. S9B). The wound epidermis (8 hpa) was damaged and at times lost in NAC-treated sections but was retained in NAFA-treated sections (Additional file 9: Fig. S9A). Similarly, the blastema at 4 dpa was better preserved upon NAFA treatment (Additional file 9: Fig. S9B). Taken together, the data show that the NAFA protocol is well suited to study wounding responses and blastema formation during regeneration. Given the NAFA protocol’s superior preservation of delicate tissues in planaria, we next sought to determine if it can be adapted to study regeneration responses in other organisms. Current ISH protocols have performed poorly for probing gene expression changes in large whole-mount samples, particularly those involving the establishment of wound epidermis and a regeneration blastema in adults (e.g., the teleost caudal fin). 
The short-lived African killifish Nothobranchius furzeri can regenerate appendages and even organs such as the heart after injury, making them ideally suited to investigate tissue regeneration in adult animals . However, WISH experiments on the regenerating killifish tail fin can be difficult due to high variability and low signal to noise ratio . To test whether the use of formic acid during fixation can facilitate robust WISH signal development, amputated killifish tail fins were fixed using 4% paraformaldehyde (PFA) with or without formic acid at 1 and 3 dpa (Fig. A). ISH for an early blastema gene follistatin-like-1 (fstl1) showed that the use of formic acid in the fixative increased the signal-to-noise ratio resulting in intense signal at the site of injury. In contrast, the fstl1 signal in samples fixed without formic acid was masked by background noise (Fig. B, C). Similar results were observed for a blastema gene, wnt10a , in 3 dpa samples (Additional file 10: Fig. S10). These results demonstrate that adding formic acid to the fixative can enhance ISH signals in regenerating fish fins, facilitating global analysis of gene expression dynamics. Furthermore, it highlights the robustness of the NAFA protocol and shows that it can be easily adapted to a variety of tissues and organisms. Preservation of external tissue layers is especially important for a research organism used to study regeneration, because stem cell proliferation and differentiation take place just beneath the wounding epidermis and form a blastema which grows to replace lost tissues . Current ISH protocols facilitate probe penetration with harsh chemical treatments which damage delicate tissues, such as the ciliated epidermis in planarians and blastemas in both vertebrates and invertebrates. These same treatments can also damage or eliminate epitopes necessary for immunostainings. 
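Signal-to-noise comparisons like the one above can be quantified from image intensities, for example as the background-subtracted mean ROI signal divided by the standard deviation of the background. This is a hedged sketch with invented pixel values, not the authors' quantification pipeline:

```python
from statistics import mean, stdev

def snr(signal_pixels, background_pixels):
    """Background-subtracted mean signal over background noise (sample stdev)."""
    return (mean(signal_pixels) - mean(background_pixels)) / stdev(background_pixels)

# Hypothetical 8-bit intensities: blastema ROI vs. an off-tissue background region
with_formic_acid = snr([180, 175, 190, 185], [20, 22, 18, 21])
without_formic_acid = snr([90, 95, 85, 88], [60, 70, 55, 65])

print(f"SNR with formic acid:    {with_formic_acid:.1f}")
print(f"SNR without formic acid: {without_formic_acid:.1f}")
```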
The new NAFA protocol addresses these shortcomings and allows for performing immunofluorescence and ISH on the same samples while preserving the delicate outer cellular layers of the planarian S. mediterranea . The use of formic acid fixative also enhanced ISH results in the regenerating tail fin of the African killifish N. furzeri . The greatly improved tissue integrity and increased signal to noise ratio provided by the NAFA protocol will enable researchers to investigate gene expression changes during wound healing and blastema formation. The NAFA protocol, like the NA (Rompolas) protocol, is highly compatible with immunofluorescence. Both protocols use nitric acid during fixation, which is known to euthanize and flatten planarians while preserving the ciliated epidermis . However, use of nitric acid alone is not sufficient to enable detection of gene expression by ISH. To develop a protocol that is compatible with both ISH and immunofluorescence, we explored the use of carboxylic acids, which are widely used in a variety of fixation approaches . These methods are a subset of a broader class called coagulant fixatives which act by precipitating proteins instead of covalently crosslinking them . Acid treatments enhance immunohistochemical studies by hydrolyzing crosslinks and potentially disrupting protein complexes, in a process known as antigen retrieval . In contrast, the NAC protocol uses enzymatic proteinase K treatment to permeabilize the sample. While immunofluorescence signals can be generated from this method, these signals are much weaker at times than those produced by the NAFA or NA (Rompolas) methods, presumably due to the loss of target epitopes by enzymatic digestion. Furthermore, the harsh mucolytic NAC treatment tears the outer layers of the planarian body, making it difficult to use for studying fragile tissues such as the epidermis and regeneration blastema. 
The NAFA protocol is also highly compatible with in situ hybridization, in stark contrast to the NA (Rompolas) protocol. Three main possibilities exist to account for this compatibility: (1) that samples fixed using the NAFA protocol are more permeable to riboprobes than samples fixed by the NA (Rompolas) protocol, (2) that RNA targets are more available to ISH probes than they are in other coagulating fixation conditions, or (3) that target RNA molecules are better preserved by NAFA than they are with harsher acid treatments. Below, we evaluate the likelihood of each of these three possibilities. First, samples fixed with the NA (Rompolas) protocol are sufficiently permeabilized to allow antibodies to penetrate to internal structures detectable by immunofluorescence, yet in situ hybridization fails on these samples. While the structures of specific antisense mRNA probes are unknown, the relatively short probes used in this study still do not yield any appreciable signal with the NA (Rompolas) protocol. This suggests that sample permeability may not explain NAFA’s superior performance in ISH. Because size affects diffusion rate and riboprobe penetration, a systematic study with probes of varying lengths is necessary to assess permeabilization in samples fixed by each method. Second, relative to prolonged strong acid treatments, such as the NA (Rompolas) protocol, the proteins in NAFA samples will likely not be hydrolyzed to the same extent, and will also be crosslinked, two factors which would be expected to increase the size and complexity of proteins bound to and around RNA molecules. Since NAFA fixation likely leads target RNA molecules to be bound or surrounded by networks of crosslinked proteins, we hypothesize that increased RNA availability to probes is another unlikely explanation for the compatibility of NAFA with ISH. 
Third, compared to the NA (Rompolas) protocol, NAFA's much briefer nitric acid treatment almost certainly results in less acid hydrolysis of RNA. Furthermore, the NAFA protocol includes EGTA to chelate calcium ions, as many RNase enzymes require these to digest RNA molecules. Of the three possibilities for the NAFA protocol's compatibility with ISH, we posit that preservation of RNA integrity is the most likely explanation. The benefits of the NAFA protocol are likely due to the unique approach of simultaneously performing crosslinking and carboxylic acid treatments. As we devised this method, we tested three carboxylic acids for their performance in ISH and chose formic acid, which is chemically the smallest and simplest carboxylic acid. Formic acid is the strongest of the three acids tested in this study. It is unknown whether other untested carboxylic acids would perform better in ISH on planarians. However, for aliphatic carboxylic acids such as the ones tested here, acid strength decreases as the length of the carbon chain increases, so we expect other acids would be unlikely to produce the full benefits created by the formic acid treatment of the NAFA protocol. Furthermore, carboxylic acids with long aliphatic carbon chains have detergent-like properties, making them potentially unsuitable for fixing tissue samples. The NAFA protocol can be used for preparing whole-mount planarian samples for immunofluorescence, ISH, and tissue sections for histological stainings such as AFOG. Including a carboxylic acid such as formic acid in the fixative also improved the ISH signal in the killifish tail fin, indicating the ease of adapting this protocol for a wide variety of research organisms. Given the success of the NAFA protocol in a traditional ISH protocol with long riboprobes, it is likely compatible with Hybridization Chain Reaction v3.0 (HCR), which uses multiple short RNA probes.
Future studies will determine the compatibility of NAFA fixation with HCR. Because it preserves the integrity of the ciliated epidermis in planarians, this method may be useful for the study of other samples with multiciliated cells, such as the lung epithelium, oviduct, and inner ear. Future work will explore the applicability of the NAFA protocol in a diverse array of samples and research organisms. We describe a fixation protocol using nitric acid and formic acid (NAFA) which preserves fragile tissues such as the planarian regeneration blastema and epidermis. The NAFA protocol is compatible with a variety of downstream assays such as in situ hybridization, immunofluorescence, and histological stainings. The protocol was also easily adapted to probe for gene expression in the regenerating killifish tail fin. Thus, the method promises to be broadly applicable to a variety of tissues and research organisms.

Animal husbandry

Asexual Schmidtea mediterranea planarians were grown in 1× Montjuic water in recirculating systems or static cultures in Tupperware boxes at 20 °C. When maintained in static cultures, 1× Montjuic water was supplemented with gentamycin (50–100 µg/mL). Animals were fed with either beef liver chunks or puree, 1–3 times a week. Animals were starved for at least 1 week before use in experiments. The inbred strain GRZ of the African turquoise killifish Nothobranchius furzeri was grown at 26 °C, and caudal fin amputation was carried out as described previously. All vertebrate work was performed according to protocols approved by the Stowers Institute for Medical Research Institutional Animal Care and Use Committee.

Riboprobe synthesis

Hapten-labeled antisense RNA probes were synthesized with a few modifications to the previously published protocol. Up to 1 μg of PCR-amplified DNA template was used for a T7-based in vitro transcription reaction to generate antisense RNA sequences.
Probes were synthesized for either 2 h or overnight at 37 °C in a thermocycler using digoxigenin (DIG), fluorescein, or DNP labeling mix. Template DNA was degraded by incubating the reaction with RNase-free DNase for 45 min at 37 °C. Riboprobes were precipitated at −80 °C for 1 h in 0.5 volumes of 7.5 M ammonium acetate and 2 volumes of ice-cold ethanol. The RNA pellet was obtained by centrifugation at 14,000 rpm for 30 min at 4 °C, washed in 75% ethanol, and air dried before resuspension in 100 μL of deionized formamide. We generally used these riboprobes at a 1:1000 dilution in ISH experiments.

NA (Rompolas), NAC, and NAFA fixation

Fixation with the NA (Rompolas) protocol was carried out as described before, with the following modification: fixation with relaxant solution was carried out for 16 h at RT. Animals were washed in PBS and post-fixed with 4% paraformaldehyde in PBS for 10 min. Samples were permeabilized in 1% IGEPAL CA-630 for 10 min and washed with PBSTx prior to carrying out ISH or immunostaining experiments. Animals were fixed using the NAC protocol as described previously. Briefly, animals were euthanized in 5% NAC for 5 min and fixed in 4% formaldehyde for 45 min. Animals were dehydrated in methanol and stored at −20 °C for at least one night and up to several months. When ready for use in experiments, samples were rehydrated in PBSTx and bleached using formamide bleach for 2 h. Animals were permeabilized with proteinase K for 10 min and post-fixed with 4% formaldehyde for 10 min. Following two 10-min washes with PBSTx, samples were carried forward into either ISH or immunostaining procedures. In NAFA fixation, animals were euthanized in NA solution and fixed in FA solution for 45 min. Following fixation, animals were dehydrated in methanol and stored at −20 °C until ready for use. Animals were rehydrated and bleached in formamide bleach for 2 h before continuing with either ISH or immunostaining.
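The riboprobe precipitation step above is specified in relative volumes (0.5 volumes of 7.5 M ammonium acetate and 2 volumes of ice-cold ethanol). As a minimal illustration only, not part of the published protocol, a small helper can compute the amounts to add for a given reaction volume:

```python
def precipitation_volumes(reaction_ul):
    """Volumes (in µL) for riboprobe precipitation: 0.5 volumes of
    7.5 M ammonium acetate and 2 volumes of ice-cold ethanol,
    relative to the in vitro transcription reaction volume."""
    ammonium_acetate = 0.5 * reaction_ul
    ethanol = 2.0 * reaction_ul
    return {
        "ammonium_acetate_ul": ammonium_acetate,
        "ethanol_ul": ethanol,
        "total_ul": reaction_ul + ammonium_acetate + ethanol,
    }

# For a 100 µL reaction: add 50 µL ammonium acetate and 200 µL ethanol.
```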
The detailed step-by-step protocol for NAFA fixation is provided in Additional files 11–15. All recipes for solutions used in the protocol are described in Additional file 16. All chemicals used in the study are listed in Additional file 17: Supplementary Table 1.

ISH and immunostaining

Animals fixed with the three different methods were treated identically for ISH and immunostaining, following previously published protocols. Fluorescently conjugated tyramides were synthesized from N-hydroxysuccinimidyl esters as previously described. The detailed step-by-step protocols for ISH and immunostaining are provided in Supplementary Files 1A–1E.

Histological sectioning and AFOG staining

WISH-stained animals were cryosectioned at 7 µm thickness as described previously. For Acid Fuchsin Orange G (AFOG) staining, fixed samples were embedded in paraffin and processed into 10-μm-thick sections. AFOG staining was carried out as previously described.

Imaging

Colorimetric WISH samples were imaged on a Leica M205 stereo microscope. Fluorescent images were taken on a Zeiss confocal microscope or a Nikon spinning disk and processed in Fiji. For Figs. and , animals were mounted either dorsally or ventrally to capture surface ciliary patterns. H3P densities were determined from maximum intensity projections as described before. H3P intensity was determined by the brightness of each focus identified by Fiji's "Find maxima" function. Average piwi-1 intensity was calculated from maximum intensity projections.
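The image quantification described above (maximum intensity projections, per-focus brightness from Fiji's "Find maxima," and focus densities per unit area) can also be sketched outside Fiji. The following NumPy-only illustration is a simplified approximation of those operations; the function names, the 3×3 neighborhood, and the threshold parameter are our own assumptions, not part of the published pipeline:

```python
import numpy as np

def max_projection(stack):
    """Collapse a (z, y, x) confocal stack into a 2D maximum intensity projection."""
    return stack.max(axis=0)

def find_maxima(img, threshold):
    """Rough analog of Fiji's 'Find maxima': return (y, x) positions of pixels
    that equal the maximum of their 3x3 neighborhood and exceed threshold."""
    padded = np.pad(img.astype(float), 1, mode="constant", constant_values=-np.inf)
    h, w = img.shape
    neighborhood_max = np.max(
        [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)], axis=0
    )
    ys, xs = np.nonzero((img == neighborhood_max) & (img > threshold))
    return list(zip(ys.tolist(), xs.tolist()))

def density_per_mm2(n_foci, area_px, um_per_px):
    """Foci per square millimeter, given the animal's area in pixels and pixel size."""
    area_mm2 = area_px * (um_per_px / 1000.0) ** 2
    return n_foci / area_mm2
```

Counting H3P-positive foci on a projection and dividing by the animal's area would yield a density, while averaging a projection over the animal mask would give a mean piwi-1 intensity, mirroring the measurements reported above.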
Additional file 1: Supplementary Fig. S1. Epidermal integrity is preserved with the NAFA protocol. Chromogenic in situ hybridization of zpuf-6 (epidermal progenitor). Transverse histology section taken anterior to the pharynx. (A) Samples fixed with the NAC protocol. Black arrows indicate damage to the epidermal layer. (B) Samples fixed with the NAFA protocol. No disruptions to the epidermis are visible. Brightfield images were taken with a stereomicroscope. Scale bars: 100 μm. Additional file 2: Supplementary Fig. S2. Different carboxylic acids tested in optimization of a new in situ protocol. Chromogenic WISH of piwi-1, zpuf-6, pc2, and porcupine. (A) NAC, (B) formic acid (4.8%), (C) acetic acid (4.9%), and (D) lactic acid (4.2%). Brightfield images were taken with a stereomicroscope. Scale bars: 100 μm. Additional file 3: Supplementary Fig. S3. FISH signal intensities are comparable between NAFA and NAC protocols. (A) Mean intensity of piwi-1 FISH signal was calculated from the max projections represented in Fig. . Box-and-whisker plot for three animals per condition. P-values were calculated with Student's t-test. Additional file 4: Supplementary Fig. S4. NAFA protocol yields brighter H3P signal without changing the density of dividing cells. (A) Comparison of H3P+ nuclei per square millimeter. (B) Comparison of mean fluorescence intensity of H3P+ nuclei from each animal. Box-and-whisker plots show median values and interquartile ranges. N = 3 animals per condition. P-values for (A) and (B) were calculated with Student's t-test. (C) Representative images of H3P stainings. Top row: all images shown with the same brightness/contrast settings, optimized for NAFA. Bottom row: each image shown with custom settings. Images are max projections of confocal stacks; scale bars: 50 μm. Additional file 5: Supplementary Fig. S5. Consistent labeling of muscle fibers by the NAFA protocol.
(A) Max projections of confocal image stacks of animals immunostained with the muscle antibody Smed-6G10. Images are arranged with anterior to the left for all animals. For rows 1–2 the dorsal surface is visible, while for rows 3–6 the ventral surface and mouth are visible. All six animals processed for each condition are shown; scale bars: 200 μm. Additional file 6: Supplementary Fig. S6. Immunostaining of internal structures by the NAFA protocol. Maximum intensity projections of confocal image substacks to specifically visualize external and internal structures. (A) 40× magnification image of the body wall musculature stained by Smed-6G10. Scale bars: 50 μm. (B) Upper: Maximum intensity projection of 3 z-stacks of whole-mount immunostaining showing gut musculature. Scale bars: 200 μm. Lower: Maximum intensity projection of sub-stacks of a 40× magnification image of gut musculature in the tail stripe region, posterior to the pharynx. Scale bars: 50 μm. (C) Maximum intensity projection of the top 15 microns of 40× magnification immunofluorescence images of the ventral ciliated epidermis. Upper: anti-acetylated tubulin (gray); lower: merge with DAPI (blue). (D) Similar to (C), for different substacks to highlight protonephridia staining by anti-acetylated tubulin. All scale bars for (C) and (D) are 50 μm. Additional file 7: Supplementary Fig. S7. Densities of mitotic cells are comparable between NAFA and NAC protocols. Numbers of H3P+ nuclei were counted and divided by the area of the worm to obtain density. These numbers are from the max projection images represented in Fig. . Each dot represents an animal. P-values were calculated with Student's t-test. Additional file 8: Supplementary Fig. S8. NAFA protocol maintains epidermal and blastema integrity during regeneration. (A) DAPI staining. (B) FISH of zpuf-6 and DAPI staining. (C) FISH of zpuf-6 and immunostaining with anti-acetylated tubulin (cilia).
(B and C) White arrows show the affected epidermal layer and red arrows at the blastema show epidermal integrity during regeneration. Maximum intensity projection of confocal images (40×). Ant: anterior; Post: posterior. Regenerating trunk fragments. Scale bars are 50 μm. Additional file 9: Supplementary Fig. S9. NAFA protocol is compatible with histological staining and maintains epidermal integrity during regeneration. (A) 8 hpa longitudinal sections stained with AFOG. Magnified images of the area around the wound marked by the yellow dotted box are shown. (B) 4 dpa longitudinal sections stained with AFOG. Areas of the zoomed-in images are highlighted by dashed boxes. Red arrows mark the epidermis. Brightfield images were taken with a compound microscope. Scale bars for whole-mounts are 100 μm, 50 μm for insets. Additional file 10: Supplementary Fig. S10. Expression of wnt10a at the blastema is better observed when the tissue is fixed in the presence of formic acid. (A) Chromogenic in situ hybridization for the injury-responsive gene wnt10a. 3 dpa tail fins fixed with or without formic acid and probed for wnt10a. Amputation site is indicated by a red dashed line. Brightfield images were taken with a stereomicroscope. Scale bars are 500 μm. Additional file 11: Detailed step-by-step protocol describing colorimetric WISH with NAFA fixation. Additional file 12: Detailed step-by-step protocol describing FISH using the NAFA fixation. Additional file 13: Detailed step-by-step protocol describing FISH and immunostaining with NAFA fixation. Additional file 14: Detailed step-by-step protocol describing whole-mount immunofluorescence staining using the NAFA fixation. Additional file 15: Detailed step-by-step protocol for colorimetric WISH on killifish fins using the NAFA protocol. Additional file 16: Details of solutions used for fixation, WISH, and immunofluorescence.
Additional file 17: Supplementary Table 1 – Vendor information and catalog numbers for all the reagents required to carry out NAFA fixation, WISH, and immunofluorescence.
Identifying, Understanding, and Addressing Disparities in Glaucoma Care in the United States

Health disparities are defined as preventable differences in the burden of disease that are linked with social, economic, and/or environmental disadvantage. These disparities result from a wide range of underlying factors, including health insurance coverage and affordability, access to and utilization of care, and quality of care. Many health disparities are rooted in social and economic inequalities that lead to unequal distribution of resources and opportunities. Health disparities are prevalent in all medical disciplines but are commonly discussed in the context of chronic disease, such as diabetes, hypertension, and heart disease, as well as disability, cancer, substance use, infant and maternal mortality, and overall life expectancy. Disparities in clinical care can magnify an underlying genetic predisposition for eye disease, such as the higher incidence of diabetic retinopathy and glaucoma among Black and Hispanic individuals compared to non-Hispanic White individuals. Health disparities are closely interconnected with social determinants of health (SDOHs), defined by the US Department of Health and Human Services as "conditions in the environment where people are born, live, work, play, worship, age, and thrive that affect a wide range of health, functioning, and quality-of-life outcomes and risks." Factors such as income, education, employment, housing, and access to healthcare services are all social determinants that can influence access to quality health care, susceptibility to illness, and ability to manage health conditions. In an investigation of differences in health outcomes at the county level, such as length and quality of life, clinical care accounted for 20% of the variation in outcomes, whereas SDOHs accounted for as much as 50%.
People's abilities to understand their health status and navigate health information are strongly influenced by socioeconomic status (SES), education, access to resources, and social support networks. Historically, lower health literacy has contributed to "suboptimal use of preventive services, delays in diagnosis, higher rates of hospitalization, and increased risk of mortality among adults." People's perceptions of their own health status also play a role in their health outcomes. Although self-reported health is generally correlated with actual health status, some research has demonstrated that a disconnect between the two can lead to reduced participation in follow-up care, adherence to medications, and lifestyle modifications. Furthermore, the patient–physician relationship is essential for optimal, equitable health care. Implicit or explicit provider biases associated with SDOHs, however, can compromise this relationship and deepen patient mistrust of medical care. Glaucoma is the leading cause of irreversible vision loss worldwide, currently affecting 80 million people globally, including 3 million in the United States. The global prevalence of glaucoma among people 40 to 80 years old is predicted to rise rapidly from 76 million in 2020 to 111.8 million in 2040 due to aging of the population. Glaucoma costs the US healthcare system an estimated $2.5 billion annually, with $1.9 billion in direct costs and $0.6 billion in indirect costs. Glaucoma often serves as a representative disease for disparities research and healthcare models, given its broad impact and chronic nature. The disease course involves longitudinal care requiring regular clinical exams, diagnostic testing, and medical and surgical interventions, including intraocular pressure (IOP)-lowering eye drops, laser treatment, and glaucoma surgery.
Studying glaucoma in this context may provide a global perspective on eyecare access and utilization, which are key factors that directly contribute to patient outcomes and healthcare disparities. Vision loss caused by glaucoma is irreversible but largely preventable, making early detection and management critical. This raises the question of why up to 8.9% of individuals with primary open-angle glaucoma (POAG) and up to 27% of those with primary angle-closure glaucoma (PACG) are affected by blindness despite screening efforts and effective medical and surgical interventions. Many studies have focused on identifying SDOHs that contribute to disparities in glaucoma care and disease outcomes. This article synthesizes recent research on disparities in glaucoma care and disease outcomes and highlights promising methods and approaches taken to address these disparities.

Racial Disparities in Glaucoma Screening, Treatment, and Outcomes

Before reviewing glaucoma disparities among different racial groups, it is important to discuss the role of race as a social construct rather than a biological or genetic attribute. Most studies cited in this paper defined race based on participants' self-reported identification, which is primarily based on the sociopolitical category with which they identify rather than their genetic background and heritage. These two definitions of race—as a sociopolitical category or marker of ancestry—are closely intertwined in the scientific literature, rendering the interpretation of racial associations with glaucoma outcomes inherently complex. It is possible that more precise variables, such as genetic variants, will become more accessible in the future for studying intrinsic differences in glaucoma pathogenesis and risk. However, it makes sense to maintain race as a sociopolitical category when conducting disparities research, as this definition of race is likely associated with SDOHs that play key roles in glaucoma outcomes.
Through this lens, it is widely recognized that Black, Hispanic, and Asian Americans experience a higher prevalence of glaucoma when compared to non-Hispanic White Americans. Although a single, collective study assessing glaucoma prevalence across all races and ethnicities in the United States has yet to be conducted, multiple studies have compared glaucoma prevalence in minority races and ethnicities with those of non-Hispanic White individuals. The Baltimore Eye Study, for example, reported that Black participants have around a fourfold higher prevalence (4.97%) of POAG than age-matched non-Hispanic White participants (1.44%). The Los Angeles Latino Eye Study (LALES), a population-based epidemiology study of Hispanic individuals in Los Angeles, reported a similar prevalence of POAG in Latinos (4.74%). In addition, the prevalence of PACG and normotensive glaucoma was found to be higher in individuals of Asian descent. Researchers have turned to genome-wide association studies to investigate the genetic underpinnings of these differences in glaucoma prevalence. Through these studies, a multitude of ancestry-specific genetic variants have been associated with glaucoma risk. Although these genetic associations are too numerous to include all of them in this review, these findings suggest that there are biological differences that contribute to race-associated glaucoma phenotypes and prevalence differences. For example, a specific variant at the APBB2 locus is associated with African ancestry and increased risk of POAG, correlating with the higher prevalence of POAG observed in this population. It is important to differentiate between intrinsic biological differences in glaucoma prevalence associated with race and ethnicity and extrinsic sociopolitical factors that contribute to disparities in disease burden and outcomes among racial minorities.
Examples of early identified disparities include Black Americans having a higher risk of blindness at first glaucoma diagnosis compared to non-Hispanic White Americans. In addition, Black and Hispanic patients are more likely to need glaucoma surgery or laser treatment at the time of diagnosis. These disparities in visual morbidity and clinical outcomes may be related to disparities in rates of glaucoma detection. For example, LALES reported that 75% of participants with POAG were previously undiagnosed, a rate that is higher than rates reported in other US populations. This finding is consistent with results of the Proyecto VER project. In that study, the rate of undiagnosed glaucoma was 62% among the Hispanic participants, which was higher than the 50% estimated occurrence among individuals of African and European ancestry. Finally, Black Americans with anatomical narrow angles are more likely to go undetected until PACG develops, even after correcting for socioeconomic factors. Another factor hindering racial equity in glaucoma diagnosis and treatment is access to care. Discussion of this problem is nuanced, as socioeconomic issues are inherently associated with race in the United States. When compared to non-Hispanic White Medicare beneficiaries, Hispanic Medicare beneficiaries attended fewer outpatient visits and received fewer optical coherence tomography (OCT) retinal nerve fiber layer (RNFL) tests, but they had more inpatient/emergency department encounters and selective laser trabeculoplasty procedures. Similarly, Black Medicare beneficiaries attended fewer outpatient visits and received fewer visual field (VF) tests, but had more inpatient/ED encounters and surgeries. The differences between non-Hispanic Black and White Medicare beneficiaries persisted after correcting for SES, which was defined by the low-income indicators that determine different levels of Medicare eligibility.
Furthermore, non-Hispanic Black and Hispanic patients reported greater difficulty affording glaucoma medications than non-Hispanic White patients and were less likely to adhere to medication regimens. It is also crucial to recognize that both overt and implicit provider biases against racial minorities have impacted the patient–physician relationship and led to an entrenched mistrust of the healthcare system. As a result, patients from historically marginalized groups may have reduced rates of seeking and utilizing health care, influencing differences in visual outcomes among various racial and ethnic groups. The issue of racial disparities in glaucoma extends to the realm of scientific research and manifests as a lack of diversity in glaucoma studies. For example, in a meta-analysis that examined the demographics of 105 POAG studies conducted between 1994 and 2019, 103 of these studies consisted predominantly of non-Hispanic White participants. Black and Hispanic populations, despite experiencing a greater disease burden, were significantly underrepresented in glaucoma clinical trials, comprising only 16.8% and 3.4% of participants, respectively, whereas non-Hispanic White individuals accounted for the majority at 70.7%.

Age

Age is an intrinsic risk factor for many eye diseases, including glaucoma. The Centers for Disease Control and Prevention (CDC) has stated that all people over the age of 60, and specifically Black Americans over age 40, are considered at high risk for developing glaucoma, highlighting differences in glaucoma risk at the intersection of race and age. Black Americans were also found to have the highest POAG prevalence at all ages, with 4% and 13% prevalence at ages 50 to 59 years and 80 to 89 years, respectively. This pattern of increased prevalence with age is seen across all races, although to different degrees.
Overall glaucoma prevalence, regardless of race or gender, has been found to increase with age, ranging from 0.6% in the 40-to-49-year age category to 8.3% in the 80+ age category. This elevated prevalence associated with age can be attributed to various factors, such as accelerated RNFL thinning. Consequently, glaucoma may develop at lower IOP levels in older patients. The confounding effect of age-related disease risk poses a challenge when investigating age-related disparities in ocular health. The prevalence of PACG also increases with older age, likely related to cataract formation and subsequent angle narrowing and closure. However, treatments for other age-related conditions, such as cataracts, can further confound detection of age-related disparities in glaucoma care. For example, cataract surgery is strongly protective against PACG; therefore, the higher risk of PACG with older age is counterbalanced by the protective effect of cataract surgery. Beyond conferring a higher risk of glaucoma, older age has been identified as a risk factor for lower participation in glaucoma screening and adherence to treatment. Patients who are younger, male, and live in an urban area are more likely to receive VF tests and more likely to receive them frequently, defined as more than one VF test per year, which is consistent with recommendations by the American Academy of Ophthalmology. Similarly, 23% of patients 65 years or older have been found to be non-adherent (based on prescription filling and number of days without glaucoma therapy) to newly prescribed topical glaucoma agents. When categorizing adherence patterns, those who were "never adherent" to their POAG medications had the highest average age at diagnosis, 64.9 years. The reasons for this drop in adherence with older age are multifactorial. Generally, medication adherence is inversely proportional to the number of medications.
Older patients with chronic medical conditions are likely to have other medications to manage and therefore may be less likely to adhere to a daily glaucoma treatment regimen in the setting of polypharmacy. , Cognitive impairment, associated with increasing age, affects memory and executive function and can contribute to misunderstanding of medication regimens and subsequent non-adherence. Age-related differences in health have also been analyzed in the context of frailty. Frailty, defined as a “state of increased vulnerability,” is a concept that helps quantify the overall impact of disease in older individuals. Studies have found that higher frailty levels are associated with older age, increased chronic conditions, and lower SES. Frailty is also heavily intertwined with ocular health; visual impairment can influence fall risk, inpatient hospitalization, and the need for additional support with activities of daily living. Studies found that Medicare recipients with higher levels of frailty had fewer outpatient visits, less glaucoma testing, and lower rates of surgical glaucoma treatment than non-frail or pre-frail recipients. Thus, moderately to severely frail patients are less likely to be seen regularly in person for glaucoma and may present for acute inpatient rather than routine outpatient care. Gender Similar to age, gender differences in glaucoma can be attributed to a combination of intrinsic, non-preventable risk factors and extrinsic, preventable social or environmental factors associated with disease. Before exploring these disparities, it is important to distinguish between biological sex and gender. Given that our aim is to evaluate modifiable factors that may influence glaucoma burden within a social context, we are considering gender in terms of the sociopolitical category with which one identifies. Studies have attempted to explore differences in the pathophysiology underlying glaucoma between males and females. 
For example, female sex has been associated with faster rates of ganglion cell complex thinning over time. Other research in mice models has found that estrogen could have a protective effect on retinal ganglion cell death in vivo. However, basic science research has yet to produce definitive results that are clearly translatable to glaucoma risk or progression in humans. It is well established that women are at higher risk of PACG than men due to anatomic differences. However, evidence supporting the role of gender as a risk factor for POAG is mixed, even within specific racial and ethnic groups. For example, LALES reported a 1.73 times higher risk for POAG in Hispanic males, whereas Proyecto VER found no gender difference among Hispanic patients with POAG in Tucson and Nogales, Arizona. In contrast, in another study, the prevalence of diagnosed POAG in women was higher than in men even after adjusting for factors such as age, race, and ethnicity. It is important to note that, given the role of older age as a glaucoma risk factor and given that women tend to live longer than men, the unadjusted prevalence of glaucoma tends to be higher in women. An important contributor to gender-related disparities in glaucoma care and outcomes is that women are more likely to seek health care than men. As a result, male patients may present with disease at a more severe stage due to lower utilization of screening and preventive care earlier on in the disease course; however, young men living in urban areas are more likely to receive VF testing and at a higher frequency than young women. The gender differences extend beyond discrepancies in testing and eyecare visits. After adjusting for the number of eyecare visits in a year and the type of provider seen, men were more likely than women to be diagnosed with POAG. Given variations in gender differences across studies, it may be useful to focus on understanding the underlying effects of gender on healthcare-seeking behaviors. 
Race and Ethnicity

Before reviewing glaucoma disparities among different racial groups, it is important to discuss the role of race as a social construct rather than a biological or genetic attribute. Most studies cited in this paper defined race based on participants’ self-reported identification, which is primarily based on the sociopolitical category with which they identify rather than their genetic background and heritage. These two definitions of race—as a sociopolitical category or marker of ancestry—are closely intertwined in the scientific literature, rendering the interpretation of racial associations with glaucoma outcomes inherently complex. It is possible that the use of more precise variables, such as genetic variants, will become more accessible in the future for studying intrinsic differences in glaucoma pathogenesis and risk. However, it makes sense to maintain race as a sociopolitical category when conducting disparities research, as this definition of race is likely associated with SDOHs that play key roles in glaucoma outcomes. Through this lens, it is widely recognized that Black, Hispanic, and Asian Americans experience a higher prevalence of glaucoma when compared to non-Hispanic White Americans. Although a single, collective study assessing glaucoma prevalence across all races and ethnicities in the United States has yet to be conducted, multiple studies have compared glaucoma prevalence in minority races and ethnicities with those of non-Hispanic White individuals. The Baltimore Eye Study, for example, reported that Black participants have around a fourfold higher prevalence (4.97%) of POAG than age-matched non-Hispanic White participants (1.44%). The Los Angeles Latino Eye Study (LALES), a population-based epidemiology study of Hispanic individuals in Los Angeles, reported a similar prevalence of POAG in Latinos (4.74%). In addition, the prevalence of PACG and normotensive glaucoma was found to be higher in individuals of Asian descent.
Researchers have turned to genome-wide association studies to investigate the genetic underpinnings of these differences in glaucoma prevalence. Through these studies, a multitude of ancestry-specific genetic variants have been associated with glaucoma risk. Although these genetic associations are too numerous to include all of them in this review, these findings suggest that there are biological differences that contribute to race-associated glaucoma phenotypes and prevalence differences. For example, a specific variant at the APBB2 locus is associated with African ancestry and increased risk of POAG, correlating with the higher prevalence of POAG observed in this population. It is important to differentiate between intrinsic biological differences in glaucoma prevalence associated with race and ethnicity and extrinsic sociopolitical factors that contribute to disparities in disease burden and outcomes among racial minorities. Examples of early identified disparities include Black Americans having a higher risk of blindness at first glaucoma diagnosis compared to non-Hispanic White Americans. In addition, Black and Hispanic patients are more likely to need glaucoma surgery or laser treatment at the time of diagnosis. These disparities in visual morbidity and clinical outcomes may be related to disparities in rates of glaucoma detection. For example, LALES reported that 75% of participants with POAG were previously undiagnosed, a rate that is higher than rates reported in other US populations. This finding is consistent with results of the Proyecto VER project. In that study, the rate of undiagnosed glaucoma was 62% among the Hispanic participants, which was higher than the 50% estimated occurrence among individuals of African and European ancestry. Finally, Black Americans with anatomical narrow angles are more likely to go undetected until PACG develops, even after correcting for socioeconomic factors.
Another factor hindering racial equity in glaucoma diagnosis and treatment is access to care. Discussion of this problem is nuanced, as socioeconomic issues are inherently associated with race in the United States. When compared to non-Hispanic White Medicare beneficiaries, Hispanic Medicare beneficiaries attended fewer outpatient visits and received fewer optical coherence tomography (OCT) retinal nerve fiber layer (RNFL) tests, but they had more inpatient/emergency department encounters and selective laser trabeculoplasty procedures. Similarly, Black Medicare beneficiaries attended fewer outpatient visits and received fewer visual field (VF) tests, but had more inpatient/ED encounters and surgeries. The differences between non-Hispanic Black and White Medicare beneficiaries persisted after correcting for SES, which was defined by the low-income indicators that determine different levels of Medicare eligibility. Furthermore, non-Hispanic Black and Hispanic patients reported greater difficulty affording glaucoma medications than non-Hispanic White patients and were less likely to adhere to medication regimens. It is also crucial to recognize that both overt and implicit biases of providers against racial minorities have impacted the patient–physician relationship and led to an entrenched mistrust in the healthcare system. As a result, patients of historically marginalized groups may have reduced rates of seeking and utilizing health care, influencing differences in visual outcomes among various racial and ethnic groups. The issue of racial disparities in glaucoma extends to the realm of scientific research and manifests as a lack of diversity in glaucoma studies. For example, in a meta-analysis that examined the demographics of 105 POAG studies conducted between 1994 and 2019, 103 of these studies consisted predominantly of non-Hispanic White participants.
Black and Hispanic populations, despite experiencing a greater disease burden, were significantly underrepresented in glaucoma clinical trials, comprising only 16.8% and 3.4% of participants, respectively, whereas non-Hispanic White individuals accounted for the majority at 70.7%.

Age

Age is an intrinsic risk factor for many eye diseases, including glaucoma. The Centers for Disease Control and Prevention (CDC) has stated that all people over the age of 60 and specifically Black Americans over age 40 are considered at high risk for developing glaucoma, thus highlighting differences in glaucoma risk at the intersection of race and age. Black Americans were also found to have the highest POAG prevalence at all ages, with 4% and 13% prevalence in Black Americans ages 50 to 59 years and 80 to 89 years, respectively. This pattern of increased prevalence with age is seen across all races, although to different degrees. Overall glaucoma prevalence, regardless of race or gender, has been found to increase with age, ranging from 0.6% in the age category of 40 to 49 years to 8.3% in the 80+ age category. This elevated prevalence associated with age can be attributed to various factors, such as accelerated RNFL thinning. Consequently, glaucoma may develop at lower IOP levels in older patients. The confounding effect of age-related disease risk poses a challenge when investigating age-related disparities in ocular health. The prevalence of PACG also increases with older age, likely related to cataract formation and subsequent angle narrowing and closure. However, treatments for other age-related conditions, such as cataracts, can further confound detection of age-related disparities in glaucoma care. For example, cataract surgery is strongly protective against PACG; therefore, the higher risk of PACG with older age is counterbalanced by the protective effect of cataract surgery.
Beyond conferring a higher risk of glaucoma, older age has been identified as a risk factor for lower participation in glaucoma screening and adherence to treatment. Patients who are younger, male, and live in an urban area are more likely to receive VF tests and more likely to receive them frequently, defined as greater than one VF test per year, which is compliant with recommendations by the American Academy of Ophthalmology. Similarly, 23% of patients 65 years or older have been found to be non-adherent (based on prescription filling and number of days without glaucoma therapy) to newly prescribed topical glaucoma agents. When categorizing adherence patterns, those who were “never adherent” to their POAG medications had the highest average age at diagnosis at 64.9 years old. The reasons for this drop in adherence with older age are multifactorial. Generally, medication adherence is inversely proportional to the number of medications. Older patients with chronic medical conditions are likely to have other medications to manage and therefore may be less likely to adhere to a daily glaucoma treatment regimen in the setting of polypharmacy. Cognitive impairment, associated with increasing age, affects memory and executive function and can contribute to misunderstanding of medication regimens and subsequent non-adherence. Age-related differences in health have also been analyzed in the context of frailty. Frailty, defined as a “state of increased vulnerability,” is a concept that helps quantify the overall impact of disease in older individuals. Studies have found that higher frailty levels are associated with older age, increased chronic conditions, and lower SES. Frailty is also heavily intertwined with ocular health; visual impairment can influence fall risk, inpatient hospitalization, and the need for additional support with activities of daily living.
Studies found that Medicare recipients with higher levels of frailty had fewer outpatient visits, less glaucoma testing, and lower rates of surgical glaucoma treatment than non-frail or pre-frail recipients. Thus, moderately to severely frail patients are less likely to be seen regularly in person for glaucoma and may present for acute inpatient rather than routine outpatient care.

Gender

Similar to age, gender differences in glaucoma can be attributed to a combination of intrinsic, non-preventable risk factors and extrinsic, preventable social or environmental factors associated with disease. Before exploring these disparities, it is important to distinguish between biological sex and gender. Given that our aim is to evaluate modifiable factors that may influence glaucoma burden within a social context, we are considering gender in terms of the sociopolitical category with which one identifies. Studies have attempted to explore differences in the pathophysiology underlying glaucoma between males and females. For example, female sex has been associated with faster rates of ganglion cell complex thinning over time. Other research in mouse models has found that estrogen could have a protective effect on retinal ganglion cell death in vivo. However, basic science research has yet to produce definitive results that are clearly translatable to glaucoma risk or progression in humans. It is well established that women are at higher risk of PACG than men due to anatomic differences. However, evidence supporting the role of gender as a risk factor for POAG is mixed, even within specific racial and ethnic groups. For example, LALES reported a 1.73 times higher risk for POAG in Hispanic males, whereas Proyecto VER found no gender difference among Hispanic patients with POAG in Tucson and Nogales, Arizona. In contrast, in another study, the prevalence of diagnosed POAG in women was higher than in men even after adjusting for factors such as age, race, and ethnicity.
It is important to note that, given the role of older age as a glaucoma risk factor and given that women tend to live longer than men, the unadjusted prevalence of glaucoma tends to be higher in women. An important contributor to gender-related disparities in glaucoma care and outcomes is that women are more likely to seek health care than men. As a result, male patients may present with disease at a more severe stage due to lower utilization of screening and preventive care earlier on in the disease course; however, young men living in urban areas are more likely to receive VF testing and at a higher frequency than young women. The gender differences extend beyond discrepancies in testing and eyecare visits. After adjusting for the number of eyecare visits in a year and the type of provider seen, men were more likely than women to be diagnosed with POAG. Given variations in gender differences across studies, it may be useful to focus on understanding the underlying effects of gender on healthcare-seeking behaviors.

Socioeconomic Status and Education

Socioeconomic status has been widely accepted as a determinant of health outcomes, with lower economic opportunities being linked to poorer health. This unfortunate reality is complicated and multifactorial in origin and is propagated by economic, physical, and sociopolitical forces such as profit-driven healthcare systems, public health disparities, neighborhood segregation, and food insecurity. Although it falls outside the scope of this review to delve deeply into the underlying causes of health inequality in the United States, it is important to provide a brief discussion on key socioeconomic variables, such as income and education, that significantly impact individual health outcomes. Household income alone has been correlated with health outcomes. In general, it is accepted that people's class is a mediator of potential longevity, with income levels having a positive correlation with life expectancy.
This is because one's income is a primary determinant of one's ability to access care and the quality of care one receives. Therefore, it is not surprising that an association between health expectancy and income exists even when correcting for unhealthy behavior, such as smoking, which suggests that income alone acts as a modifier of health outcomes. With regard to glaucoma treatment and income specifically, lower annual income levels (defined as less than $60,000/year) are associated with poor adherence to treatment regimens. A study in India found that low-income individuals spend nearly 26% of their income on glaucoma treatment. The ability to afford transportation is also inherently tied to income. The most common reason for eligible individuals not showing up for a free eye clinic appointment was a lack of transportation. A similar study, which assessed factors preventing follow-up for patients enrolled in the Philadelphia Telemedicine and Glaucoma Detection and Follow-up Study, found that the main reasons for missed appointments were feeling ill (38.1%), forgetting the appointment (34.2%), lack of transportation (13.5%), and inability to miss work (7.1%). Education plays a similarly important role in shaping health outcomes. People who have completed at least a high school education exhibit higher utilization of eyecare services compared to those without this academic background. In contrast, lower education rates are commonly associated with lower SES, less robust health infrastructure, and lower levels of health literacy. Beyond education level, physicians are significantly less likely to educate Black patients about their glaucoma than non-Black patients. This is particularly concerning given that Black patients are more likely to have severe glaucoma and VF defects compared to other racial populations.
Thus, progressing toward equal educational effort and opportunities for all patients regardless of age, gender, and race will be essential for addressing health disparities at the individual level rather than at the community or institutional level. In the Medication Adherence in Glaucoma to Improve Care (MAGIC) trial, individuals with the highest adherence to topical glaucoma medications endorsed greater knowledge about the disease as being a facilitating factor for their adherence, demonstrating how vital this physician–patient interaction is in promoting better outcomes. Other factors such as lower income and older age have also been implicated in contributing to poor health literacy rates. As a result, health literacy is recognized as a critical determinant of health and is a target of interventions designed to improve the health education, adherence, and outcomes of patients. These same mediators (namely, access to care and education/health literacy) are also responsible for propagating glaucoma disparities. Prior research has linked lower education with a higher risk of visual impairment from cataracts, macular degeneration, and glaucoma. Poor health literacy has also been associated with poor medication adherence among glaucoma patients. Similarly, poor comprehension of the extent or severity of one's glaucomatous disease has been associated with poor follow-up. In the Philadelphia Telemedicine and Glaucoma Detection and Follow-up Study, 29.2% of the participants who missed their follow-up appointments reported being unaware of either their diagnosis of glaucoma or its severity. These factors help explain why individuals of lower SES have higher rates of glaucoma.
Demographic data from the Michigan Screening and Intervention for Glaucoma and Eye Health Through Telemedicine (MI-SIGHT) program found that the area deprivation index—or the composite measure of neighborhood deprivation based on an individual's address—corresponded with higher levels of positive glaucoma screening tests. Similarly, individuals with glaucoma and lower SES were less likely to have seen a physician within 12 months. This pattern is not limited to the United States. In the United Kingdom, rates of glaucoma were highest (2.4%) among those with the lowest annual income and decreased as income increased, with the lowest rate of glaucoma (0.9%) being reported in the highest income category. This discrepancy in income implies that underlying, non-modifiable factors are at play, such as race, age, or gender-related biases. Lower socioeconomic scores were also associated with later detection of glaucoma. Ultimately, affordability, continuity, and regular sources of follow-up are essential in ensuring access to eyecare services.

Insurance Product Disparities

Insurance plays a critical role in a patient's quality of and access to eyecare. Medicaid status correlates with other socioeconomic factors, such as financial status, housing, education, assets, and occupation. As a result, Medicaid can serve as a surrogate for assessing an individual's SES when economic status, such as income or net worth, is unavailable. Patients with Medicaid are 2.34 times more likely to not receive any glaucoma testing in the 15 months following their first glaucoma diagnosis compared to patients with commercial health insurance. In addition, almost half (48.6%) of Medicaid recipients with POAG did not receive VF and/or OCT testing, compared to only 21.5% of commercial health insurance recipients.
Among glaucoma suspects identified at free health fairs for underserved communities in South Florida, those with health insurance were 1.74 times more likely to seek follow-up care compared to those without insurance, adjusted for age, gender, race/ethnicity, and education level. Out of those lost to follow-up, 57% of participants cited a lack of insurance as the primary reason, highlighting the vital role insurance plays in ensuring access to health care. Additionally, Medicaid or self-paying patients incur significantly higher total and glaucoma-related costs in the first year of diagnosis compared to patients with commercial insurance. This financial burden is a major issue, particularly for Medicaid recipients who are typically representative of lower SES.

Regional Disparities

Density and type of providers can contribute to regional variations in the quality of eye care and glaucoma outcomes. A nationwide analysis found that counties in the South have the lowest eyecare provider availability even when adjusting for population density. This may explain why patients in Southern and Pacific regions are less likely to be detected with anatomical narrow angles prior to developing PACG. Meanwhile, the incidence and prevalence of glaucoma based on healthcare claims data are higher in New England and Mid-Atlantic regions and lower in the East South Central and Mountain regions, even after controlling for factors such as race/ethnicity, age, access to care, number and type of providers, and number of eyecare visits. Such findings suggest that provider density does not entirely explain observed regional differences in glaucoma; overdiagnosis or underdiagnosis by providers and their respective “gold standards” of diagnosis in specific regions may also play a role. The nature of the community that a patient lives in, such as urban, suburban, or rural, also affects that patient's ability to access glaucoma care.
Living in an urban area is associated with an increased likelihood and increased frequency of receiving VF tests. Transportation presents a greater barrier in non-urban communities with lower access to affordable, convenient public transportation. A study in Florida found that only 30.5% of inhabitants live within 15 minutes of ophthalmologists who are members of the American Glaucoma Society, contributing to a significant travel burden for people over the age of 65 to reach one.
This may explain why patients in Southern and Pacific regions are less likely to be detected with anatomical narrow angles prior to developing PACG. Meanwhile, the incidence and prevalence of glaucoma based on healthcare claims data are higher in New England and Mid-Atlantic regions and lower in the East South Central and Mountain Regions, even after controlling for factors such as race/ethnicity, age, access to care, number and type of providers, and number of eyecare visits. Such findings suggest that provider density does not entirely explain observed regional differences in glaucoma; overdiagnosis or underdiagnosis by providers and their respective “gold standards” of diagnosis in specific regions may also play a role. The nature of the community that a patient lives in, such as urban, suburban, or rural, also affects that patient's ability to access glaucoma care. Living in an urban area is associated with an increased likelihood and increased frequency of receiving VF tests. Transportation presents a greater barrier in non-urban communities with lower access to affordable, convenient public transportation. A study in Florida found that only 30.5% of inhabitants live within 15 minutes of ophthalmologists who are members of the American Glaucoma Society, contributing to a significant travel burden for people over the age of 65 to reach one.

Telemedicine and Online Health Education

Disparities exist across all levels of glaucoma care in the United States; therefore, promoting equity and pioneering solutions to overcome healthcare barriers are of the utmost importance. The most recent stance in 2022 by the US Preventive Services Task Force (USPSTF) maintains that there is insufficient evidence to evaluate the benefits versus harms of population-based glaucoma screenings in asymptomatic adult patients. However, earlier detection of glaucoma facilitates earlier intervention, prevention of disease progression, and better quality of life and life expectancy.
Although the lack of ample research such as randomized controlled trials comparing screened and unscreened populations contributes to the current uncertainty about the benefit of glaucoma screening for the general public, early screening for at-risk populations may provide a possible solution to mitigate disparities. Telemedicine, a healthcare modality that has been gaining traction over the past decade, especially following the onset of COVID-19, provides an alternative approach to glaucoma care delivery and screening at-risk populations. The American Telemedicine Association defines telemedicine as the use of communication technologies to improve patient health outcomes, increase access to health information, and obtain care. One example is the MI-SIGHT program, which aims to provide telemedicine-based glaucoma screening by partnering primary care–based community clinics (Hope Clinic and Hamilton Community Health Network in Michigan) with ophthalmologists from the University of Michigan. Ophthalmic technicians at community-based clinics perform glaucoma diagnostic testing on patients and send the results via electronic health records to ophthalmologists, who then provide recommendations for appropriate eye care. Patients diagnosed with glaucoma or identified as glaucoma suspects are also enrolled in a randomized controlled trial of personalized glaucoma coaching in which they receive motivational interviewing (a conversational style or technique that encourages patients to change their behaviors to benefit their health), education on glaucoma, and guidance on creating a question list for future ophthalmology appointments. Implementation of patient coaching and empowerment is imperative to improve screening outcomes, given that follow-up appointments after initial detection are crucial for continual health management.
For example, Black patients tend to ask fewer questions during appointments than non-Hispanic White patients, which can be rectified by creating question prompt lists to promote active participation during appointments. Therefore, addressing disparities requires not only increased access to glaucoma screening but also educating patients on how to take full advantage of this benefit. After 1 year of operation, the MI-SIGHT program screened over 1000 patients for glaucoma, visual impairment, and diabetic retinopathy, with an overall patient satisfaction rating of 99%. It remains to be seen whether this model of free, individualized eye care and education can improve glaucoma disparities in underserved populations relative to standard methods of care delivery. Another telemedicine-based program, Alabama Screening and Intervention for Glaucoma and Eye Health Through Telemedicine (AL-SIGHT), has adopted a similar approach by providing glaucoma screening through federally qualified health centers in rural counties of Alabama for patients regardless of insurance status. The program provides patients with IOP values > 30 mmHg with a referral to an eyecare provider within 2 weeks. After a 6-week trial, there was a 56% improvement in glaucoma knowledge and 9% improvement in attitudes toward frequent follow-up with ophthalmologists. Similarly, the New York Screening and Intervention for Glaucoma and Eye Health Through Telemedicine (NYC-SIGHT) program directly brings eye health screening into the underserved regions of Washington Heights and Harlem. This program addresses barriers to eye care by providing free screenings and remote image reading by ophthalmologists, in addition to setting up appropriate follow-up appointments based on those reads. 
After 15 months, 66% of participants whose screening demonstrated a visual acuity of 20/40 or worse, IOP of 23 to 29 mmHg, or an unreadable fundus image were referred to ophthalmology for follow-up appointments, and 20% were diagnosed as glaucoma suspects or with manifest glaucoma. Moving forward, utilizing telemedicine and free community-based health screenings to bridge disparities in underserved or rural regions of the United States may significantly improve glaucoma outcomes and overall public understanding of this disease. Health education delivered through the Internet can also enhance understanding about glaucoma and adherence to treatments. For example, one study reported that Black patients under the age of 70 would prefer to have online glaucoma educational programs (with topics such as explaining glaucoma, its implications, and the role of medications) led by ophthalmologists rather than ophthalmologic technicians or pharmacists as an option to learn about their glaucoma. However, an important consideration when providing Internet-based educational materials is that they need to be interpretable by a wide demographic. Online glaucoma materials are commonly written at the 10th- to 11th-grade reading level, which is markedly higher than the American Medical Association's recommendation to compose patient education materials at or below the seventh-grade reading level. This becomes more relevant given that patients who use outside educational sources or feel empowered to play an active role in their own care have better adherence to medications. As a result, making glaucoma education not only accessible online but also easier to understand can encourage better understanding and compliance with glaucoma treatments. For individuals who do not have readily available access to Internet-based educational modalities, free health education in the local community is also a viable option.
When asked about where glaucoma educational programs should be offered, nearly 40% of respondents voted for programs at public community centers, senior citizen centers, or on television, and nearly 30% requested to have programs at local churches. Collectively, implementing both online and offline educational programs either virtually or in person within accessible local communities may reach a much broader audience. When discussing these solutions to alleviate disparities, it is important to acknowledge the financial costs and logistical aspects that are involved in implementing these programs. The cost of implementation is not a negligible amount; for example, the MI-SIGHT program costs over $100,000 per clinic for the facility, provider and technicians, and diagnostic equipment. Although these costs may seem high in isolation, these programs are actually cost effective when considered in the context of cost per case and patient volume. By strategically targeting higher volume, under-resourced communities, these programs can deliver care to those who face the highest levels of poverty and greatest need for care. At the same time, glaucoma screening must be conducted in a deliberate and appropriate manner. The USPSTF states that the benefit of glaucoma screening is not clear; therefore, screening should remain mindful of costs and the financial burden associated with false-positive tests, especially in resource-limited safety-net healthcare settings that are unable to sustain non-productive financial burdens.

Artificial Intelligence

Artificial intelligence (AI), a technology that simulates intelligent behavior and analyzes data, can be utilized within the medical field to diagnose or manage diseases efficiently and can potentially address disparities in access to care and health outcomes.
Ophthalmology is a medical specialty that relies on manual evaluation of image-based data to guide patient care; therefore, the potential for AI implementation within ophthalmology is high. In fact, numerous studies have demonstrated the utility of AI in diagnosing, detecting, and monitoring the progression of ocular disease, including glaucoma and diabetic retinopathy, based on fundus and optic disc images. With this context, it is evident that AI offers a tremendous opportunity to improve glaucoma care. When this technology is eventually integrated into standard care, such as diagnostic devices and electronic medical record systems, it will be possible to more efficiently replicate time- and cost-intensive tasks for which we currently rely on human providers. Overall, there is a large potential for this technology to bridge economic gaps and reach a wider audience. The virtual nature of telemedicine also opens additional avenues to address eyecare access disparities using AI. For example, predictive AI models can evaluate VF tests to diagnose glaucoma with increased sensitivity and equivalent specificity compared with physicians and could ultimately be used to support or even automate the glaucoma decision-making process. Despite its immense promise, implementation of AI in clinical practice faces several challenges. First, AI models are trained on datasets derived from specific patient populations that may differ from those of a provider’s end-users; overgeneralizing AI models can produce biased recommendations and conclusions that perpetuate or even exacerbate healthcare disparities. Second, AI models are also not immune to errors in interpreting data and providing accurate recommendations; therefore, providers should be mindful and continue to exercise sound clinical judgment when adopting novel AI tools.
Third, each healthcare system differs in its capacity and resources to accommodate newly detected glaucoma patients; therefore, model sensitivity should be balanced by the need to ensure timely care for more urgent patients. Finally, issues beyond the AI program itself, such as physician and patient technology literacy, legal liability, and overall public confidence in AI, will all have to be thoroughly addressed. Although AI has great potential to revolutionize glaucoma care, there are many technical and ethical questions that must first be addressed. The current literature has identified consistent disparities in glaucoma care that are evident across race, age, and gender. Although underlying genetic factors may determine intrinsic disease risk, the social context of these factors strongly influences glaucoma detection and management, thereby impacting patient outcomes. These factors are also closely interwoven with SES, insurance product, and geographic region, ultimately shaping the disease course and trajectory.
It is necessary to gain a holistic understanding of existing disparities in order to delineate solutions to bridge existing gaps in glaucoma care. Proposed interventions, including teleglaucoma and community-based screening initiatives, are focused on increasing access to eye care to facilitate glaucoma detection and improve health education. In the future, AI may provide an affordable and accessible solution to glaucoma detection and monitoring, but the technology also carries an inherent risk of bias. Ultimately, all proposed solutions to address glaucoma disparities will benefit from promoting patient-centered care and fostering the patient–physician relationship. We hope improved awareness about glaucoma disparities will motivate healthcare providers, policymakers, and other stakeholders to work collaboratively on reducing disease burden and improving clinical outcomes in vulnerable patient populations.
Trends in female applicants to Canadian ophthalmology residency programs from 1998-2020

Historically, medicine has been a male-dominated profession. In 1959, only approximately 6% of Canadian medical students were female. By 1989, approximately 44% of medical students were female. During the mid 1990s and early 2000s, the proportion of female and male medical students equalized and has subsequently skewed towards a higher number of females ever since. While this statistic is recognized and celebrated in both the medical and non-medical communities, more studies are needed to identify disparities in gender representation within specific medical specialties. This information may inform possible steps for achieving gender equality in various medical specialties, which may in turn increase access to care and health outcomes for female patients. Despite the increasing number of female students entering medical school over the past few decades, surgical specialties, on average, attract fewer female applicants. This gender disparity is particularly evident within the field of ophthalmology, where each year between 2010-2020, just 33.3-46.5% of Canadian Resident Matching Service (CaRMS) applicants ranking ophthalmology as their first choice were female. In other surgical fields where historical gender disparities exist, such as cardiac surgery or otolaryngology-head and neck surgery, recent studies have been published examining and discussing applicant trends based on gender. A study by Lorello et al. in 2020 evaluated trends in female CaRMS applicants to all specialties; however, a study specific to the trends of female students applying to ophthalmology has not been published since 2006.
More specifically, recent comparisons between trends seen in ophthalmology and other surgical specialties, statistical examination of how those trends have developed and changed over time, examination of matching success rates by gender in ophthalmology, and a review of the proportion of practicing female ophthalmologists were identified as gaps in the literature. As such, a more recent review of the data is warranted to contribute to the growing knowledge base on gender representation in medicine, and in ophthalmology specifically. These data may allow for future research into reasons behind these trends, as well as prompt discussion and change regarding any potential gender biases seen. To address these gaps in the literature, we conducted a retrospective analysis. The primary aim was to examine female representation in ophthalmology applicants and those successfully matching to the specialty from 1998 to 2020. The secondary aims of this study were 1) to compare trends in female representation to other surgical specialties, 2) to compare the success rate of female applicants to the success rate of male applicants in ophthalmology residency programs, and 3) to examine trends in practicing female ophthalmologists.

Study design

This was a retrospective cross-sectional analysis of gender-stratified match data results for surgical specialties from the publicly available CaRMS database between 1998-2020. We obtained data for Canadian medical graduate (CMG) applicants in the first iteration of the match. We then carried out a second analysis on publicly available data from the Canadian Medical Association (CMA) Physician Census from 2000 to 2019 regarding trends in practicing female ophthalmologists. In addition, we examined data from the CMA regarding practicing female surgeons in other surgical specialties. The local research ethics board waived the need for ethics approval as this study analyzed publicly available data.
Sampling methods

We collected aggregate data for the annual number of male and female applicants ranking ophthalmology as their first-choice specialty, as well as those that subsequently matched, for each year. Gender was self-reported on the CaRMS application with the only available options of “male” and “female.” We then obtained cumulative data of the total number of females among all CaRMS applicants, as well as the number of females applying and matching to any surgical program, for each year. The surgical disciplines grouped together included cardiac surgery, general surgery, neurosurgery, ophthalmology, orthopedic surgery, otolaryngology, plastic surgery, urology and vascular surgery. These programs were included as they are listed under surgical specialties on the CaRMS website dating back to 1998. We extracted additional data from the CMA Physician Census from 2000 to 2019. This data included the number and percentage of male and female ophthalmologists currently practicing in Canada. We also obtained these same data points for other surgical subspecialties.

Statistical analysis

To account for the increase in the total number of medical students and the increase in size of residency training programs over time, we used the proportion of females to analyze trends instead of absolute number of females. We first examined the proportion of females-to-males ranking ophthalmology as their first-choice specialty, as well as the proportion of females-to-males matching to ophthalmology as their first-choice specialty. We applied the same method for the analysis of female applicants in other surgical specialties. To compare the success rate of female to male applicants in ophthalmology residency programs, we calculated the success rate for each gender for each year. This represents the proportion of successfully matched applicants out of all the applicants that ranked ophthalmology as their first choice.
We also used proportions when comparing female and male practicing ophthalmologists and other surgical specialists. We used a fractional regression model with logit link, a type of regression model used when the dependent variable has upper and lower limits (such as with proportions), to analyze the proportion of female ophthalmology applicants over the study period. This type of analysis shows whether the change in a dependent variable (i.e., proportion) is correlated with the change in another variable (i.e., time) in a statistically significant way. We carried out the analysis of trend first by treating time as a continuous variable for each year, and then by treating it as a categorical variable across distinct 4- or 5-year time periods (1998‒2002, 2003‒2007, 2008‒2012, 2013‒2016, and 2017‒2020). We used a categorical variable for surgical discipline in the fractional logit model to evaluate other surgical specialties’ trends over time, and an interaction term (which essentially describes how similar one trend is to another) to assess if the change in trend over time was different than the reference category of ophthalmology. We performed a Bonferroni correction when multiple comparisons were made. We analyzed and compared the change in trend of the matching success rate for each gender over the study period using a fractional logistic regression with an interaction term to assess for effect modification. We conducted further sub-analysis at the yearly level using a Fisher's exact test to determine if being male or female was associated with an increased rate of matching in that particular year. We compiled descriptive statistics and graphical trends over time for each residency training program across all years, as well as for practicing female physicians across different surgical disciplines. We performed all data analyses using Stata software (version 15.1; Stata Corp, College Station, Texas). 
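The fractional regression described above models a bounded proportion through a logit (log-odds) link, so fitted values stay between 0 and 1. The authors fit this model in Stata; as a rough illustration only, the stdlib Python sketch below fits an ordinary least-squares line to logit-transformed yearly proportions. The applicant counts are made up for the example (they are not the actual CaRMS data), and a proper fractional logit (e.g., Stata's fracreg or a GLM with a binomial family) would weight observations by their denominators rather than treat each year equally.

```python
import math

# Hypothetical yearly counts (NOT the actual CaRMS data): female applicants
# and total applicants ranking ophthalmology first, by year.
years = [1998, 2003, 2008, 2013, 2017, 2020]
female = [37, 45, 60, 68, 70, 75]
total = [152, 160, 170, 180, 190, 225]

def logit(p):
    """Log-odds transform used by the fractional logit link."""
    return math.log(p / (1.0 - p))

# Ordinary least squares of logit(proportion) on calendar year:
# a simplified stand-in for the fractional logit trend analysis.
props = [f / t for f, t in zip(female, total)]
y = [logit(p) for p in props]
n = len(years)
xbar = sum(years) / n
ybar = sum(y) / n
slope = sum((x - xbar) * (yi - ybar) for x, yi in zip(years, y)) / \
        sum((x - xbar) ** 2 for x in years)

print(f"trend on the logit scale: {slope:.4f} log-odds per year")
# A positive slope means the odds that an applicant is female rose over time.
```

On these illustrative counts the slope is positive, mirroring the direction of the trend reported in the results; the statistical significance of such a slope is what the fractional logit model's p-values assess.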
This was a retrospective cross-sectional analysis of gender-stratified match data results for surgical specialties from the publicly available CaRMS database between 1998-2020. We obtained data for Canadian medical graduate (CMG) applicants in the first iteration of the match. We then carried out a second analysis on publicly available data from the Canadian Medical Association (CMA) Physician Census from 2000 to 2019 regarding trends in practicing female ophthalmologists. In addition, we examined data from the CMA regarding practicing female surgeons in other surgical specialties. The local research ethics board waived the need for ethics approval as this study analyzed publicly available data. We collected aggregate data for the annual number of male and female applicants ranking ophthalmology as their first-choice specialty, as well as those that subsequently matched, for each year. Gender was self-reported on the CaRMS application with the only available options of “male” and “female.” We then obtained cumulative data of the total number of females among all CaRMS applicants, as well as the number of females applying and matching to any surgical program, for each year. The surgical disciplines grouped together included cardiac surgery, general surgery, neurosurgery, ophthalmology, orthopedic surgery, otolaryngology, plastic surgery, urology and vascular surgery. These programs were included as they are listed under surgical specialties on the CaRMS website dating back to 1998. We extracted additional data from the CMA Physician Census from 2000 to 2019. This data included the number and percentage of male and female ophthalmologists currently practicing in Canada. We also obtained these same data points for other surgical subspecialties. To account for the increase in the total number of medical students and the increase in size of residency training programs over time, we used the proportion of females to analyze trends instead of absolute number of females. 
We first examined the proportion of females-to-males ranking ophthalmology as their first-choice specialty, as well as the proportion of females-to-males matching to ophthalmology as their first-choice specialty. We applied the same method for the analysis of female applicants in other surgical specialties. To compare the success rate of female to male applicants in ophthalmology residency programs, we calculated the success rate for each gender for each year. This represents the proportion of successfully matched applicants out of all the applicants that ranked ophthalmology as their first choice. We also used proportions when comparing female and male practicing ophthalmologists and other surgical specialists. We used a fractional regression model with logit link, a type of regression model used when the dependent variable has upper and lower limits (such as with proportions), to analyze the proportion of female ophthalmology applicants over the study period. This type of analysis shows whether the change in a dependent variable (i.e., proportion) is correlated with the change in another variable (i.e., time) in a statistically significant way. We carried out the analysis of trend first by treating time as a continuous variable for each year, and then by treating it as a categorical variable across distinct 4- or 5-year time periods (1998‒2002, 2003‒2007, 2008‒2012, 2013‒2016, and 2017‒2020). We used a categorical variable for surgical discipline in the fractional logit model to evaluate other surgical specialties’ trends over time, and an interaction term (which essentially describes how similar one trend is to another) to assess if the change in trend over time was different than the reference category of ophthalmology. We performed a Bonferroni correction when multiple comparisons were made. 
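As a small aside on the Bonferroni correction mentioned above: with m planned comparisons, each individual test is judged against a threshold of α/m. A minimal sketch (the function name and the count of eight comparator specialties are illustrative, not taken from the paper):

```python
def bonferroni_threshold(alpha, n_comparisons):
    # Bonferroni correction: split the family-wise error rate alpha
    # evenly across the n planned comparisons.
    return alpha / n_comparisons


# e.g., comparing eight other surgical specialties against ophthalmology
# at a family-wise alpha of 0.05 gives a per-test threshold of 0.00625.
threshold = bonferroni_threshold(0.05, 8)
```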
We analyzed and compared the change in trend of the matching success rate for each gender over the study period using a fractional logistic regression with an interaction term to assess for effect modification. We conducted further sub-analysis at the yearly level using a Fisher's exact test to determine if being male or female was associated with an increased rate of matching in that particular year. We compiled descriptive statistics and graphical trends over time for each residency training program across all years, as well as for practicing female physicians across different surgical disciplines. We performed all data analyses using Stata software (version 15.1; Stata Corp, College Station, Texas).

Gender representation in ophthalmology applicants

The proportions of female medical students ranking and matching to ophthalmology as their first choice in the annual CaRMS match have both shown an overall increase. From 1998 to 2020, the proportion of female applicants compared to male applicants ranking ophthalmology rose from 24.3% (n = 37) to 33.3% (n = 75) (p = 0.001) and the proportion of female applicants matching to ophthalmology rose from 28.6% (n = 14) to 40.5% (n = 37) (p = 0.023). Similarly, when comparing the change in proportion using the 4- or 5-year time periods, female applicants ranking ophthalmology increased from 23.5% (n = 166) between 1998-2002 to 37.9% (n = 253) between 2017-2020 (p < 0.001) and female applicants matching to ophthalmology increased from 21.3% (n = 75) between 1998-2002 to 35.9% (n = 145) between 2017-2020 (p = 0.006). However, comparing incremental change in proportion throughout the intermediate 4- to 5-year time periods did not reveal the same trend, and most of the increase from 1998 to 2020 actually occurred in the early 2000s.
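The per-year gender comparison described in the methods uses Fisher's exact test on a 2×2 table of matched versus unmatched applicants by gender. A self-contained pure-Python sketch of the two-sided test (the helper name and the example counts are ours, not the study's data):

```python
from math import comb


def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables sharing the
    observed margins that are no more likely than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def p_table(x):
        # Probability of a table with x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))


# Hypothetical year: 3 of 4 female and 1 of 4 male applicants matched.
p = fisher_exact_two_sided(3, 1, 1, 3)  # ~0.486: no evidence of a gender effect
```

Summing all tables no more likely than the observed one is the usual convention for the two-sided p-value in common implementations.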
Lastly, the proportion of females matching to ophthalmology statistically significantly decreased in 2017-2020 to 35.9% (n = 145) compared to 44.4% (n = 151) in the previous time period (p = 0.002).

Gender representation compared to other surgical disciplines

The proportion of female applicants ranking and the proportion of female applicants matching to each surgical subspecialty as their first choice have increased significantly from 1998 to 2020 (p < 0.05 for each specialty individually). The change in this increasing proportion over time for both ranking and matching was not significantly different when each program was compared to ophthalmology, except for cardiac surgery and otolaryngology. For these two specialties, the rate of change in the proportion of female applicants ranking it as first choice was greater than the rate of change of females ranking ophthalmology (p = 0.041 and p = 0.016 for cardiac surgery and otolaryngology, respectively, when compared to ophthalmology). This change in tendency was not seen in the proportion of females matching to cardiac surgery and otolaryngology when compared to ophthalmology. We also ran the same analyses from 2003-2020, as it may appear that the rate of change of females ranking and matching to ophthalmology decreased compared to other surgical specialties. In terms of females ranking surgical specialties, only cardiac surgery had a significantly higher rate of change (p = 0.044). However, general surgery, orthopedic surgery, and otolaryngology had significantly higher rates of change compared to ophthalmology when it came to females matching (p = 0.008, p = 0.010, and p = 0.021, respectively). We analyzed the overall proportion of females applying and matching to any surgical program over the 4- or 5-year time periods.
The proportion of female applicants ranking any surgical discipline increased from 22.8% (n = 1,114) between 1998-2002 to 44.0% (n = 1,619) between 2017-2020 (p < 0.001) and the female applicants matching to any surgical discipline increased from 21.8% (n = 749) between 1998-2002 to 42.5% (n = 1,062) between 2017-2020 (p < 0.001). However, this change in proportion was not significant when comparing the time periods 2017-2020 to 2013-2016, and 2013-2016 to 2008-2012 (p > 0.05 for each).

Comparison of the success rate between the genders

The acceptance rate of female applicants that ranked ophthalmology as their first choice and successfully matched did not significantly change throughout the years studied (p = 0.120). The matching success rates of female and male applicants throughout the years studied were also not significantly different (p = 0.45). Between 1998 and 2020, the average success rate of female applicants was 61.0% and the average success rate of male applicants was 58.0%. When looking at each year individually, the likelihood of successfully matching among those that ranked ophthalmology as their first choice did not depend on gender.

Gender representation in practicing physicians

The proportion of female practicing ophthalmologists out of the total number of practicing ophthalmologists has significantly increased over the past two decades from 16.3% (n = 1,072) in 2000 to 28.3% (n = 1,246) in 2019. We also observed an increase in female representation in the trends of female practicing physicians found in other surgical specialties, but this pattern was highest in ophthalmology compared to the other surgical subspecialties measured in this study.
Our study demonstrates that there was a statistically significant increase in the proportion of total female applicants to ophthalmology, and in the proportion of female applicants accepted, from 1998 to 2020. The proportion of female applicants increased from 24.3% in 1998 to as high as 46.5% in 2018, and matched female applicants increased from 28.6% in 1998 to as high as 52.8% in 2011. However, this change occurred early on, as the incremental change between 1998-2002 and 2003-2007 was statistically significant, but between each subsequent 4- to 5-year period there was no significant increase. In fact, in the 2017-2020 period there was a significant decrease in the proportion of matched females compared to the previous period. Based on , we can see that the increase primarily occurred from 2002-2004, and a relative plateau begins afterward. The increase in female applicants and matches within ophthalmology in the early 2000s may be partially accounted for by the overall increase in female medical students in Canada. The stagnation seen in the following years could be due to a variety of factors that discourage women from pursuing a career in ophthalmology; however, further investigation is needed to elicit what these factors could be. Further investigation into the decrease in the proportion of females matching to ophthalmology is also needed. The percentage of total female applicants to CaRMS increased from 45.4% in 1998 to 56.1% in 2020.
Although the number of females exceeds that of males in Canadian medical schools, there is a persistent gap in the female-to-male ratio in ophthalmology ranking and matching. Between 2017 and 2020, the average proportion of females applying and matching to ophthalmology was still only 37.9% and 35.9%, respectively. In comparison to other surgical specialties, we found that the rates of change for females ranking and matching were statistically significantly higher than in ophthalmology for certain specialties (e.g., cardiac surgery for ranking, general surgery for matching), but not significantly different in most others. The reasons for these disparities are still unclear, but it appears as though the overall trends in females ranking and matching are mostly similar across surgical specialties. This raises the question of why this could be occurring; given the higher number of females than males in medical schools, it implies that a smaller percentage of females than males are applying to ophthalmology and other surgical specialties. Where differences do exist, further monitoring and research into why they are present, and what institutions can do to change them, may be warranted so that potential strategies to equalize gender representation can be well informed. In 2006, Baerlocher & Noble concluded that there was no discrimination against female ophthalmology CaRMS applicants based on gender. Other studies on potential gender-based favouritism in specific surgical specialties have found similar results. However, a 2020 study by Ruzycki et al. found that females were less likely to match to a first-choice surgical subspecialty than males, so results are variable. According to our study, a comparison of the success rates between males and females in ophthalmology, both overall and for each individual year, did not reveal a statistically significant difference.
The success rate of female applicants has also not statistically changed from 1998-2020, suggesting that any increases seen in female ophthalmology resident numbers are not based on a change in their ability to match, but rather on an increase in the proportion applying. Again, this finding may suggest that the difference in proportion between males and females is due to a lack of female applicants; why this is still occurring may be an important topic for further research. Many theories have been put forth regarding why rates of female applications are lower in surgical specialties, including ophthalmology. Lack of mentorship in male-dominated specialties and societal expectations or personal family goals are commonly cited. Our study shows that there are fewer female practicing ophthalmologists than males to act as mentors. No specific data or studies could be identified on rates of female leadership in ophthalmology in Canada, but statistics from the American Academy of Ophthalmology in the United States show that female ophthalmologists continue to be underrepresented, often comprising approximately 30% of leadership positions. However, a study published by Kletke et al. in 2020 noted that most recently practicing (finished residency within the last 20 years) female ophthalmologists in Canada felt like they had adequate female mentors within the specialty. This would suggest that a lack of female mentorship in ophthalmology may not play a large role in the gender disparity of applicants, and that there are probably other factors affecting these rates. Several personal factors may also influence specialty choice; the lifestyle implications of surgical specialties are one such factor. A recent systematic review by Trinh et al.
in 2021 showed that female medical students across the globe were statistically significantly more influenced by lifestyle factors such as maternity leave and possibility of part-time work than their male counterparts when considering a career in surgery; however, ophthalmology was not included in this review and thus results may not apply. Another systematic review corroborated these findings by demonstrating that specialties with a higher ratio of females to males were linked with a better work-life balance. However, ophthalmology is broadly seen as a lifestyle-positive specialty, and as such the aforementioned lifestyle factors may not apply to ophthalmology as much as other surgical specialties. As such, more investigation is needed to pinpoint exact causes and to explain why some surgical specialties have seen increases in female first choice ranking/matching in the years since 2003-2007 that ophthalmology has not. In terms of trends observed in the proportion of practicing female ophthalmologists, the proportion has increased from 16.3% in 2000 to 28.3% in 2019. The proportion of practicing female ophthalmologists in 2019 does not reflect the number of females accepted into and graduating from residency programs since 1998, as the proportion of practicing female ophthalmologists is much lower than the proportion applying and matching into the specialty. This may be a result of several factors, such as part-time employment, maternity leave situations, or a historical male dominance in ophthalmology. More specifically, if ophthalmologists practice until the age of retirement, the population of ophthalmologists that began practicing in the mid-to-late 20th century will have much more male representation than the population of ophthalmologists that began practicing recently, based on historical match data.
As such, the imbalance of genders practicing in the profession will likely take multiple generations to equalize, but the increase in proportion from 2000-2019 does suggest that this equalization process is underway. In addition, it is important to note that ophthalmology does hold a higher female representation in practicing physicians compared to other surgical specialties, although the rate of increase in proportion is not significantly different compared to other specialties. Potential reasons for this could be examined in future research. The main limitations of this study arise from the nature of the data available from the CaRMS database, and the fact that this is a retrospective study. The specialty-specific data on the number of students applying and matching was only reported for those that applied to said specialty as their first choice. Data on applicants who applied to or matched to a surgical specialty as their second choice or lower are unknown and thus we are potentially missing the totality of applicants. However, given that the success rate is not significantly different between males and females, it is unlikely that this would create any bias in the analysis of proportions. Finally, CaRMS data up to 2020 only provided options of “male” and “female” for students to select, and therefore may be unable to adequately represent all medical students applying for residency positions. Overall, there is a positive pattern of females applying to and being accepted into an ophthalmology residency program in Canada. However, there is an obvious plateauing of the numbers over the years since 2003-2004. Despite the increased proportion of female-to-male medical students, there still exists a modest but definite disparity in candidates applying to ophthalmology, which then translates to a lower proportion of females matching to the specialty. 
Further studies are needed to identify residual gender disparities and the factors that encourage or deter females from applying to ophthalmology and other surgical specialties so that we may address these.
Recommendations for the use of neurophysiological techniques in the diagnosis of brain death, from the Sociedad de Neurofisiología Clínica de las Comunidades de Valencia y Murcia

This guideline aims to unify the neurophysiological criteria for performing electroencephalographic studies in the diagnosis of brain death, based on the currently available literature. The diagnosis of brain death is regulated in Spain by Royal Decree 1723/2012, of 28 December: 'The death of the individual may be certified after confirmation of the irreversible cessation of circulatory and respiratory functions or of the irreversible cessation of encephalic functions.' The most important principles when approaching a diagnosis of brain death are: establishing the cause of the coma; excluding all confounding factors; determining the usefulness and benefit of the intervention, or preparing the patient for the complementary tests (to optimize them); and starting from a correctly performed clinical examination. Confirmatory electroencephalographic diagnosis of brain death is one of the instrumental tests set out in current Spanish legislation. The electroencephalogram (EEG) is an accessible study with a specificity of 90% in the diagnosis of brain death. In many countries, including Spain, performing an EEG is recommended, but not mandatory, in most situations. Initially, the terms electrocerebral silence and isoelectric tracing were used to define the absence of cerebral activity; in the 1970s they were replaced by the term electrocerebral inactivity, defined as the absence of cerebral activity in the EEG recording at an amplitude of 2 µV, with no reactivity to exteroceptive or nociceptive stimuli.
Although the electroencephalographic study is not an instrumental test legally required in Spain for all cases of brain death diagnosis, it is indicated in infratentorial causes, since cortical function cannot then be assessed clinically; in patients with severe destruction of the craniofacial skeleton or any circumstance preventing examination of the brainstem reflexes; in intolerance of the apnea test; in children under 1 year of age; and in the absence of a destructive brain lesion demonstrable by clinical evidence or neuroimaging. It is also the technique with which there is the most experience in brain death, dating back to the 1970s, whereas the other techniques have been introduced more recently. According to the Spanish National Transplant Organization (Organización Nacional de Trasplantes), 122,341 organ transplants were performed in 2020 in the 82 participating countries. In the European Union, 9,447 donors were registered, of whom Spain contributed 19%, representing 5% of those registered worldwide. Brain death is defined as 'the irreversible cessation of the functions of all intracranial neurological structures, both the cerebral hemispheres and the brainstem.' Royal Decree 1723/2012, of 28 December, sets out the neurological criteria that must be met to make the diagnosis of brain death, stating verbatim:

Diagnostic conditions: coma of known etiology and irreversible nature (there must be clinical or neuroimaging evidence of a destructive lesion of the central nervous system).

Clinical neurological examination.

2.1. It must be systematic, complete, and rigorous, and the following conditions must first be met:
• Hemodynamic stability.
• Adequate oxygenation and ventilation.
• Body temperature > 32 °C in adults and > 35 °C in children up to 24 months of age.
• Absence of metabolic or endocrine disturbances that could be causing the coma.
• Absence of central nervous system depressant drugs or substances that could be causing the coma.
• Absence of neuromuscular blocking agents.

2.2. The key findings on neurological examination are: unreactive coma (no motor or vegetative response to painful stimulation in the territory of the cranial nerves, and no decerebrate or decorticate posturing) and absence of the brainstem reflexes (pupillary light, corneal, oculocephalic, oculovestibular, gag, and cough reflexes). Absence of response to the atropine test (which pharmacologically explores the vagus nerve and its brainstem nuclei by administering 0.04 mg/kg of intravenous atropine and checking the heart rate before and after; the increase must not exceed 10% of the baseline heart rate). Apnea demonstrated by the apnea test (confirming that there are no thoracic or abdominal respiratory movements when the arterial pCO2 exceeds 60 mmHg).

2.3. The presence of spontaneous or induced motor activity of spinal origin does not invalidate the diagnosis of brain death.

Conditions that hinder the clinical diagnosis of brain death by limiting the neurological examination:
• Patients with severe fractures of the craniofacial skeleton or circumstances preventing examination of the craniocephalic reflexes.
• Intolerance of the apnea test.
• Hypothermia (temperature of 32 °C or below).
• Intoxication with, or prior treatment with high doses of, central nervous system depressant drugs or substances.

Observation periods:
• Six hours for a known destructive lesion.
• 24 hours in the case of anoxic encephalopathy.
• If CNS depressant drugs have been used, the observation period must be adjusted at the physician's discretion according to the half-lives of the drugs and the patient's clinical and biological condition.
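To make the two numeric thresholds above concrete (an atropine dose of 0.04 mg/kg and a heart-rate rise capped at 10% of baseline), a small illustrative helper; the function names are ours, not part of the decree:

```python
def atropine_dose_mg(weight_kg):
    # Decree-specified dose: 0.04 mg/kg of intravenous atropine
    return 0.04 * weight_kg


def atropine_response_absent(baseline_hr, post_hr):
    # The response is considered absent when the post-injection heart
    # rate does not exceed 110% of the baseline value (a 10% rise).
    return post_hr <= 1.10 * baseline_hr


# e.g., a 70 kg patient receives 2.8 mg; a rise from 60 to 65 bpm
# stays within the 10% margin, whereas 60 to 70 bpm does not.
```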
Instrumental supporting diagnostic tests: these are mandatory in the cases indicated in section 2.4 and in the following situations:
– Absence of a destructive brain lesion demonstrable by clinical evidence or neuroimaging.
– When the lesion is primarily infratentorial (specifically an EEG or a cerebral blood-flow test).
The neurophysiological tests that assess neuronal function are the EEG and evoked potentials.

Diagnosis of brain death:
• Uncomplicated: coma of known cause; no conditions hindering the diagnosis; a compatible clinical examination and one conclusive instrumental supporting test. Brain death may be diagnosed without the need to wait for the corresponding observation period.
• In special situations: when the circumstances set out in point 2.4 are present, there is no destructive brain lesion demonstrable clinically or radiologically, and there is an infratentorial lesion: at least one confirmatory diagnostic test is required.

Newborns, infants, and children:
• The criteria are as described for adults. The neurological examination in neonates and young infants must include the sucking and rooting reflexes. In neonates, the examination must be repeated several times, after checking the body temperature.
• Observation periods:
– Preterm neonates: observation period of 48 hours.
– Neonates (from week 37 of gestation up to 30 days of age): observation period of 24 hours.
– Children over 30 days and up to 24 months of age: observation period of 12 hours.
– From 2 years of age onward, as in adults: an observation period of six hours for a structural lesion and 24 hours for an anoxic lesion.
In all cases the period may be shortened in accordance with the diagnostic tests performed, and it may be omitted if a diagnostic test demonstrating absence of cerebral blood flow is carried out.
The working group comprised 14 clinical neurophysiologists from the Valencia and Murcia regions. An initial questionnaire was drawn up following the Delphi method; an exhaustive literature review was then performed, and each of the points in these recommendations was agreed by consensus.

Recommendations for performing the electroencephalographic study in the diagnosis of brain death

Minimum requirements
• The studies must be performed by specialist physicians with training and, above all, experience in the correct interpretation of electroencephalographic recordings.
• Before performing the study, the patient's identification details, the cause of the coma, the neuroimaging findings, and the hemodynamic parameters must be correctly recorded.
• A complete neurological examination compatible with brain death, as described in Royal Decree 1723/2012, must be performed. It must first be confirmed that the body temperature is within the range indicated by that regulation, as well as the remaining hemodynamic parameters.
• All drugs administered to the patient in the previous 24 hours (60 hours in the case of barbiturates) must be assessed (especially barbiturates and benzodiazepines, and above all the use of propofol), together with any metabolic, hepatic, or renal disturbances that could alter their half-lives. The most recent laboratory results must be reviewed (see the section on medication).
• The observation times from the event causing brain death and between examinations (if repetition is necessary), in both children and adults, must be respected as indicated by the legislation.
Technical requirements
• A montage based on a reduced international 10-20 system is recommended, with a minimum of eight electrodes, which may be silver chloride cup electrodes with conductive paste or needle electrodes: FP1-FP2-C3-CZ-C4-T3-T4-O1-O2, plus a ground electrode; the electrocardiographic signal must also be recorded. Recording the electromyographic signal is advisable in order to assess muscle artifact.
• Under optimal conditions, the interelectrode distance in adults should not be less than 10 cm. If the situation demands it (head trauma or recent surgery), the electrodes may be moved slightly, provided the minimum interelectrode distance (6 to 6.5 cm) is respected; this must also be documented.
• Filters: the resistance of each electrode must be between 100 and 10,000 Ω. The sensitivity must be 2 µV/mm. The high-frequency (low-pass) filter must not be set below 30 Hz, and the low-frequency (high-pass) filter must not be set above 1 Hz. The mains (notch) filter at 50 Hz (60 Hz depending on the equipment and country) must be used with caution and, if used, segments of the study must be recorded without it for comparison.
• The study must include a minimum of 30 minutes of good-quality recording that can be correctly interpreted free of artifacts (with the exception of the electrocardiogram). Artifacts must be corrected as far as possible: if there is muscle artifact, the use of a neuromuscular blocking agent may be considered, and any non-essential equipment that may generate artifact (for example, the bed connection) should be disconnected. The ECG artifact in adjacent derivations is very difficult to eliminate; however, its electrodes may be spaced out on the chest.
• Auditory and painful stimulation must be performed and described in the study.
• If the EEG study needs to be repeated, a minimum of four hours must elapse between studies.
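The acquisition parameters above lend themselves to an automated sanity check before the recording starts. A minimal sketch (function and parameter names are hypothetical; the thresholds are the ones recommended above):

```python
def eeg_settings_ok(impedance_ohm, high_pass_hz, low_pass_hz, sensitivity_uv_per_mm):
    """Check one channel's acquisition settings against the recommendations.

    Returns (overall_ok, per-check detail). Thresholds: electrode resistance
    100-10,000 ohm; high-pass (low-frequency) filter not above 1 Hz; low-pass
    (high-frequency) filter not below 30 Hz; display sensitivity 2 uV/mm.
    """
    checks = {
        "impedance": 100 <= impedance_ohm <= 10_000,
        "high_pass": high_pass_hz <= 1.0,
        "low_pass": low_pass_hz >= 30.0,
        "sensitivity": sensitivity_uv_per_mm == 2.0,
    }
    return all(checks.values()), checks


# e.g., 5 kOhm electrode, 0.5 Hz high-pass, 70 Hz low-pass, 2 uV/mm -> passes;
# a 50 Ohm electrode would fail the impedance check.
```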
The start time and end time of the study must be recorded. Technical sources of interference with the study include medication pumps, bed connections, sometimes the warming blanket, compression boots or people moving around the patient. Technical problems to be assessed include major trauma, scalp lesions and recordings in children; all of these must be taken into account and, as far as possible, resolved.

Hypothermia

Electrocerebral activity can be depressed under moderate hypothermia, at a temperature of 30 °C or below, which can increase the abnormalities in the recording. Hypothermia can also alter drug metabolism and clearance. The recommendation is that the body temperature be 34 °C or higher.

Medication

A number of drugs and substances can act as confounders when the diagnosis of brain death is to be made, since they can reduce cerebral blood flow and depress cerebral bioelectrical activity: barbiturates, propofol, opioids, opiates, antiepileptics, benzodiazepines, phenothiazines, tricyclic antidepressants, muscle relaxants, alcohol and cocaine; however, it is very difficult to determine their therapeutic range and, in many cases, their toxicity levels and, above all, the actual elimination time for each individual patient. When the exposure time is unknown, international guidelines recommend a toxic-exposure assessment: Check that medication levels do not exceed the therapeutic range in blood. If the patient is assumed to have normal hepatic and renal function (assessed by laboratory tests) and medication levels or clearance can be measured, at least five half-lives of the drug must be allowed to elapse before the clinical and, in our case, electroencephalographic assessment.
If the drug is known but cannot be quantified, the patient must be observed for no fewer than four half-lives of the drug, ensuring that its elimination is not interfered with by other drugs, organ dysfunction or hypothermia. If the particular drug is not known but there is a strong suspicion that its effect persists, the patient must be observed for at least 48 hours to assess changes in the neurological examination; if none occur, the study can be performed. If alcohol intoxication is confirmed or suspected, blood alcohol values must be 80 mg/dL or lower. Severe metabolic, endocrine or acid-base disturbances must be corrected before the EEG study is considered. To this end, the drug, its mechanism of action and its half-life must be known, as described in the .

Special considerations

Some recommendations indicate that, in premature infants and newborns under 7 days of age, the neurological examination should be repeated as many times as necessary. In children, the most frequent causes are trauma, anoxic encephalopathy, infections and brain neoplasms. Some metabolic disorders, such as hepatic failure, renal insufficiency, severe hypoglycaemia or hyponatraemia, can alter the recording; it is therefore necessary to confirm the findings clinically and to assess whether the EEG should be repeated or, where appropriate, a flow test performed.

Interpretation of the study

After eliminating all artefacts that could give rise to doubt or error, it must be verified that the recording is compatible with the absence of cerebral activity (no cerebral activity of any kind at 2 µV amplitude and no response to any stimulus) in all assessable channels (especially the temporal areas, which may show the last residual electroencephalographic activity). Series have been described in which this residual activity persisted for up to 168 hours.
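The half-life arithmetic behind the observation rules in the medication section is simple: after n half-lives, a fraction 0.5**n of the initial drug level remains (about 3% after the five half-lives required when levels can be measured). A minimal sketch, with the 12-hour half-life below chosen purely as an illustrative placeholder rather than guideline data:

```python
# Sketch of the half-life arithmetic behind the observation-time rules:
# after n half-lives, a fraction 0.5**n of the initial drug level remains.

def remaining_fraction(n_half_lives: float) -> float:
    """Fraction of the initial drug level left after n half-lives."""
    return 0.5 ** n_half_lives

def observation_hours(t_half_hours: float, n_half_lives: int = 5) -> float:
    """Minimum wait before the EEG assessment when levels can be measured
    (five half-lives per the recommendations; four if the drug is known
    but cannot be quantified)."""
    return t_half_hours * n_half_lives

# Illustrative only: an assumed half-life of 12 h gives a 60 h wait,
# by which point about 3.1% of the initial level remains.
wait = observation_hours(12.0)
left = remaining_fraction(5)
```

This calculation assumes normal hepatic and renal function; as the text notes, organ dysfunction, hypothermia or drug interactions prolong real elimination times beyond this idealised estimate.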
Report

It must contain the patient's complete identification data: name, age, date of birth and health-card or medical-record number. The neurological examination must be described in detail. The presence or absence of medication must be stated and, in the latter case, the time elapsed since its withdrawal. The EEG report must include a minimal, ordered description of: number of electrodes, filters, recording conditions, background activity, artefacts and stimuli performed. The conclusion must be clear and concise and include the time at which the recording ended. Where possible, images of the study should be included. The time to be certified will be the time at which the study ended. The report must bear the signature and details of the specialist physician. The most important neurophysiological concepts in brain death are summarised in the .

Evoked potentials

An evoked potential is defined as the response generated in nervous tissue to a stimulus; those used routinely in clinical practice are somatosensory, visual or auditory. These techniques assist in the diagnosis of brain death when they can be performed, provided the lesion is not primarily infratentorial. Although they are legally approved tests, they are used less often than the EEG. They are objective, easily accessible, non-invasive techniques that can be performed at the patient's bedside. They are not altered as significantly as the EEG by the effects of sedative medication; however, although they are not by themselves confirmatory tests of brain death, they can serve multimodally as back-up tests for the EEG. For the assessment of the critically ill patient, we describe their general characteristics and limitations, not only in brain death but also in general practice.
Somatosensory evoked potentials

These are used to assess the functional integrity of the somatosensory pathway by stimulating a peripheral nerve, which generates a sequence of potentials at different points along the nerve pathway and in the cerebral cortex. The most widely used is the median nerve (mixed nerve) somatosensory evoked potential at the wrist. Access to the stimulation site is therefore important (if the wrist is not accessible, consider stimulating at the elbow). The stimulus consists of a square-wave current with short pulses (200 µs), at low frequency (2-3 Hz) and with the intensity set at the maximum value (approximately 20 mA). Recording with silver chloride cup electrodes with conductive paste, or with needle electrodes, is recommended at all levels. The recording parameters are: display of 0.5 to 20 µV/division with a total recording time of 50 ms (5 ms/division); a high-pass filter below 3 Hz and a low-pass filter above 2,000 Hz; and two averaging blocks of at least 500 repetitions. Assessment of the minimum responses includes: N9: peripheral response. Active electrode at Erb's point ipsilateral to the stimulation. Reference electrode at Fz-FPZ or at Erb's point contralateral to the stimulation. N13: cervical. Postsynaptic activity of the posterior horn of the spinal cord. Active electrode on the C7 spinous process. Reference electrode on the anterior neck (over the glottis). P14: cervicomedullary junction. Active electrode at C3 or C4, ipsilateral to the stimulation. Reference electrode on the shoulder, the earlobe or Erb's point ipsilateral to the stimulation. N20: primary somatosensory cortex. Active electrode at C3 or C4, contralateral to the stimulation. Reference electrode at FZ, or C3-C4 ipsilateral to the stimulation.
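The averaging blocks specified above (at least 500 repetitions) rely on a standard property of time-locked averaging: uncorrelated background noise shrinks roughly as 1/sqrt(N) over N sweeps, while the stimulus-locked response is preserved. The following sketch demonstrates this on synthetic data only; the waveform, noise level and sample counts are illustrative, not real SSEP values:

```python
# Sketch: why evoked potentials are averaged over hundreds of sweeps.
# Uncorrelated noise shrinks ~1/sqrt(N) on averaging; the time-locked
# response does not. Synthetic data, illustrative only.
import math
import random

random.seed(0)
N_SWEEPS, N_SAMPLES = 500, 50                # e.g. 500 repetitions, 50 samples per sweep
# Toy time-locked response hidden in noise (loosely playing the role of an N20 peak)
signal = [1.0 if 18 <= t <= 22 else 0.0 for t in range(N_SAMPLES)]

def sweep():
    """One stimulus repetition: response plus Gaussian noise much larger than it."""
    return [s + random.gauss(0.0, 5.0) for s in signal]

avg = [0.0] * N_SAMPLES
for _ in range(N_SWEEPS):
    for t, v in enumerate(sweep()):
        avg[t] += v / N_SWEEPS

# Residual noise on the baseline (samples outside the response window):
baseline = [avg[t] for t in range(N_SAMPLES) if not 18 <= t <= 22]
noise_rms = math.sqrt(sum(v * v for v in baseline) / len(baseline))
# Expected residual ~ 5 / sqrt(500), i.e. roughly 0.22, versus 5.0 in a single sweep,
# so the toy response now stands clearly above the noise floor.
```

The same reasoning applies to the two 1,000-response blocks used for brainstem auditory potentials: recording two independent blocks also allows replicability of the waveform to be checked.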
The brainstem (P14) and cortical (N20) responses are assessed; given their sensitivity to cerebral anoxia, their absence is highly sensitive for a poor neurological prognosis or brain death. As a drawback, the N20 response may be altered or inconclusive if there is a concomitant cervical spinal cord lesion with absence of N13 when the study is performed in a fractionated manner; it is not valid in the first 24 hours after a cardiorespiratory arrest and may be altered by hypothermia (cooling-off period).

Brainstem auditory evoked potentials

These are obtained with auditory stimuli, using alternating clicks delivered through headphones at 85 dB (nHL), 0.1 ms in duration, at 19.1 pps, and are assessed over the first 10 ms. The responses originate in the auditory nerve and in the auditory pathways of the brainstem, from the cochlear nuclear complex to the inferior colliculus. Wave I: distal part of the acoustic or cochlear nerve. Wave II: proximal part of the acoustic nerve. Wave III: pontomedullary junction (it may also include the cochlear nuclei and the trapezoid body). Waves IV-V: reflect the propagation of stimulus conduction along the lateral lemniscus to the inferior colliculus. For the montage, silver chloride cup electrodes with conductive paste, or subdermal needle electrodes, are used at Cz and at the right and left earlobes (A2-A1). The active electrode will be A2 for right-sided stimulation or A1 for left-sided stimulation, with Cz as reference. The analysis time is 10 ms/display; the sensitivity, 10 µV/division; and the filters, from 100 to 3,000 Hz. Two averaging blocks of 1,000 responses are recommended. Special consideration must be given to possible pre-existing lesions of the cochlea and the auditory nerve, which may be caused by hypoxia or ischaemic mechanisms. In traumatic lesions, the responses may be absent owing to a translabyrinthine fracture.
For a correct assessment, wave I must be identifiable in order to consider the response present. For brain death, the diagnostic criterion is the bilateral absence of responses from the brainstem to the cerebral auditory cortex; however, as described, it is a very sensitive but poorly specific technique. Among its disadvantages, it can be altered by lesions of the VIII cranial nerve and by brainstem lesions. Waves I-III-IV may be altered, above all, by previous exposure to ototoxic medication. Hypothermia may induce an increase in the absolute latencies of the waves.

Visual evoked potentials

The aim is to assess, over the occipital cortex, the response to unstructured stimuli delivered by light-emitting diode (LED) displays mounted on goggles. This yields what are known as flash visual evoked potentials. They explore the visual pathway from the ganglion cells that form the optic nerve to the activity of the occipital visual cortex. Silver chloride cup electrodes with conductive paste, or subcutaneous needle electrodes, are used at Fz and Oz, with the reference or ground electrode at Cz, at A1 or A2 (or the A1-A2 average). The recording is made over Fz-Oz, assessing the response of the second electropositive wave, or P2, with latencies varying up to 120 ms, although similar to that obtained with structured pattern stimuli (the P100 wave). The analysis time is 300 ms, with two averaging blocks of at least 100 responses each. For brain death, the bilateral absence of responses in the visual cortex is assessed, with preservation of the responses in the electroretinogram. The technique can be altered by lesions of the optic nerve or retina and by deep sedation and, like the auditory potentials, it is sensitive but poorly specific, which is why it must be used in a multimodal fashion.
Limitations of neurophysiological studies in the diagnosis of brain death

As discussed in detail for each technique, the limiting factors of neurophysiological techniques in the diagnosis of brain death are: Presence of hypothermia. Medication that may interfere with the observation times. Patients with metabolic disturbances. Scalp lesions or major trauma of the craniofacial skeleton. Technical limitations in disconnecting equipment that generates artefacts, especially electrical ones.
Consideraciones técnicas Se recomienda un montaje basado en el sistema internacional 10-20 reducido, mínimo de ocho electrodos, que pueden ser de cucharilla de cloruro de plata con pasta conductora o de aguja: FP1-FP2-C3-CZ-C4-T3-T4-O1-O2, un electrodo de tierra y, adicionalmente, es preciso registrar la señal electrocardiográfica. Es recomendable el registro de la señal electromiográfica para valorar el artefacto muscular. La distancia interelectrodo en condiciones óptimas no debe ser menor de 10 cm en adultos. Si la situación lo demanda (traumatismo craneoencefálico o cirugía reciente), es posible mover mínimamente los electrodos, teniendo en cuenta que hay que respetar la distancia mínima interelectrodo (de 6 a 6,5 cm); además debe documentarse . Filtros: la resistencia de cada electrodo debe estar entre 100 y 10.000 Ω. La sensibilidad debe ser de 2 µV/mm. Los filtros de alta frecuencia (paso bajo) no deben configurarse por debajo de 30 Hz y el filtro de baja frecuencia (paso alto) no debe estar por encima de 1Hz . El filtro de red o de muesca (notch filter) a 50 Hz (60 Hz según equipo y país) debe usarse con cuidado y, si se usa, deben registrase segmentos del estudio sin él para comparar. El estudio debe obtener un mínimo de 30 minutos de registro de calidad que pueda ser correctamente interpretado libre de artefactos (con excepción del electrocardiograma). Los artefactos deben ser, en lo posible, corregidos: si existe artefacto muscular, se puede considerar el uso de un bloqueante neuromuscular, desconectar todo equipo que no sea imprescindible que pueda generar artefacto (por ejemplo, la conexión de la cama). El artefacto de electrocardiograma en las derivaciones contiguas es muy difícil de eliminar; sin embargo, pueden espaciarse sus electrodos a nivel torácico. Es necesario realizar y describir en el estudio estimulación auditiva y dolorosa. Si es necesario repetir el estudio EEG, al menos debe hacerse con un mínimo de cuatro horas entre cada estudio . 
Hay que registrar la hora de inicio y la hora de finalización del estudio. En cuanto a las interferencias técnicas del estudio, se encuentran las bombas de medicación, las conexiones de la cama, algunas veces la manta de calor, las botas compresivas o pasos alrededor del paciente. Los problemas técnicos que se deben valorar son: grandes traumatismos, lesiones en el scalp y registros en niños; todos ellos han de tenerse en cuenta y, en la medida de lo posible, intentar resolverse. Hipotermia Se puede deprimir la electrogenia bajo una hipotermia moderada, temperatura menor o igual a 30 °C, lo que puede incrementar las anomalías en el registro . Puede alterar el metabolismo y el aclaramiento de los fármacos. La recomendación es que la temperatura corporal sea igual o superior a 34 °C . Medicación Una serie de medicamentos y sustancias pueden confundir cuando se va a realizar el diagnóstico de muerte encefálica, ya que pueden disminuir el flujo cerebral y generar una depresión en la actividad bioeléctrica cerebral: barbitúricos, propofol, opioides, opiáceos, antiepilépticos, benzodiacepinas, fenotiacinas, antidepresivos tricíclicos, relajantes musculares, alcohol y cocaína ; sin embargo, es muy difícil determinar su rango terapéutico, niveles de toxicidad en muchos casos y, ante todo, el tiempo real de eliminación individualizado para cada paciente. Cuando no se conoce el tiempo de exposición, las guías internacionales recomiendan el uso de un test de exposición a tóxicos: Valorar si los niveles de medicación no exceden el rango terapéutico en la sangre. Si se asume que el paciente tiene una función hepática y renal normal (valorar analíticamente) y se pueden medir los niveles de medicación o aclaramiento, se debe permitir que al menos pasen cinco vidas medias del fármaco antes de realizar la valoración clínica y, en nuestro caso, electroencefalográfica. 
Si se conoce el fármaco, pero no se puede cuantificar, el paciente debe ser observado no menos de cuatro vidas medias del fármaco, teniendo en cuenta que su eliminación no interfiere con otros fármacos, disfunción orgánica o hipotermia. Si el fármaco en particular no se conoce, pero existe alta sospecha de que persiste su efecto, debe observarse al paciente al menos 48 horas para valorar cambios en la exploración neurológica y, si no se producen, se puede realizar el estudio. Si existe intoxicación por alcohol confirmada o sospechada, los valores de alcohol en sangre deben ser iguales o menores a 80 mg/dL. Deben corregirse las alteraciones metabólicas graves, endocrinas o ácido-base antes de plantear el estudio EEG. Para ello se ha de conocer el fármaco, su mecanismo de acción y su vida media , como se describe en la . Consideraciones especiales Algunas recomendaciones indican que, en niños prematuros y recién nacidos menores de 7 días de vida, se debería repetir la exploración neurológica cuantas veces sea necesario . En niños, las causas más frecuentes son: trauma, encefalopatía anóxica, infecciones y neoplasias cerebrales . Algunos trastornos metabólicos, como el fallo hepático, la insuficiencia renal, la hipoglucemia grave o la hiponatremia, pueden alterar el registro; por tanto, es necesario confirmarlo clínicamente y valorar si procede repetir el EEG o, en su caso, realizar una prueba de flujo. Interpretación del estudio Tras eliminar todo tipo de artefactos que puedan dar lugar a duda o error, es necesario valorar que el registro sea compatible con ausencia de actividad cerebral (sin ningún tipo de actividad cerebral a 2 µV de amplitud y sin respuesta a ningún estímulo) en todas las derivaciones valorables (especialmente, en áreas temporales que pueden presentar la última actividad electroencefalográfica residual). Existen series descritas donde esta actividad residual ha persistido hasta durante 168 horas . 
Informe Debe contener los datos de filiación del paciente completos: nombre, edad, fecha de nacimiento y número de tarjeta sanitaria o de historia clínica. La exploración neurológica debe estar detallada. Se debe incluir la presencia/ausencia de medicación y, en este último caso, el tiempo que ha transcurrido desde la suspensión de ésta. El informe EEG debe reunir una descripción mínima y ordenada de: número de electrodos, filtros, condiciones de registro, actividad basal, artefactos y estímulos realizados. La conclusión debe ser clara, concisa e incluir la hora de finalización del registro. Se han de incluir, en lo posible, imágenes del estudio. La hora que se certificará será la hora de finalización del estudio. Debe llevar la firma y datos del facultativo especialista. En la se resumen los conceptos más importantes desde el punto de vista neurofisiológico en la muerte encefálica. ] Los estudios deben realizarse por facultativos especialistas que tengan formación y, ante todo, experiencia en la correcta interpretación de los registros electroencefalográficos . Es preciso, antes de realizar el estudio, registrar correctamente los datos de identificación y filiación del paciente, la causa del coma, los resultados de la neuroimagen y los parámetros hemodinámicos. Se debe realizar una exploración neurológica completa que sea compatible con la muerte encefálica, como lo describe el Real Decreto 1723/2012 . Previamente, se ha de corroborar que la temperatura corporal esté dentro del rango indicado para la misma norma, así como el resto de los parámetros hemodinámicos. Es necesario valorar todos los fármacos administrados al paciente (especialmente barbitúricos, benzodiacepinas, sobre todo el uso de propofol) en las últimas 24 horas (60 horas en caso de barbitúricos ), así como posibles alteraciones metabólicas, hepáticas o renales que puedan interferir en los tiempos de vida media de éstos. Se deben revisar las analíticas más recientes (véase apartado de medicación). 
Los tiempos de observación desde el evento causante de la muerte encefálica y entre exploraciones (si fueran necesarias), tanto en niños como adultos, deben respetarse como lo indica la legislación. , ] Se recomienda un montaje basado en el sistema internacional 10-20 reducido, mínimo de ocho electrodos, que pueden ser de cucharilla de cloruro de plata con pasta conductora o de aguja: FP1-FP2-C3-CZ-C4-T3-T4-O1-O2, un electrodo de tierra y, adicionalmente, es preciso registrar la señal electrocardiográfica. Es recomendable el registro de la señal electromiográfica para valorar el artefacto muscular. La distancia interelectrodo en condiciones óptimas no debe ser menor de 10 cm en adultos. Si la situación lo demanda (traumatismo craneoencefálico o cirugía reciente), es posible mover mínimamente los electrodos, teniendo en cuenta que hay que respetar la distancia mínima interelectrodo (de 6 a 6,5 cm); además debe documentarse . Filtros: la resistencia de cada electrodo debe estar entre 100 y 10.000 Ω. La sensibilidad debe ser de 2 µV/mm. Los filtros de alta frecuencia (paso bajo) no deben configurarse por debajo de 30 Hz y el filtro de baja frecuencia (paso alto) no debe estar por encima de 1Hz . El filtro de red o de muesca (notch filter) a 50 Hz (60 Hz según equipo y país) debe usarse con cuidado y, si se usa, deben registrase segmentos del estudio sin él para comparar. El estudio debe obtener un mínimo de 30 minutos de registro de calidad que pueda ser correctamente interpretado libre de artefactos (con excepción del electrocardiograma). Los artefactos deben ser, en lo posible, corregidos: si existe artefacto muscular, se puede considerar el uso de un bloqueante neuromuscular, desconectar todo equipo que no sea imprescindible que pueda generar artefacto (por ejemplo, la conexión de la cama). El artefacto de electrocardiograma en las derivaciones contiguas es muy difícil de eliminar; sin embargo, pueden espaciarse sus electrodos a nivel torácico. 
Es necesario realizar y describir en el estudio estimulación auditiva y dolorosa. Si es necesario repetir el estudio EEG, al menos debe hacerse con un mínimo de cuatro horas entre cada estudio . Hay que registrar la hora de inicio y la hora de finalización del estudio. En cuanto a las interferencias técnicas del estudio, se encuentran las bombas de medicación, las conexiones de la cama, algunas veces la manta de calor, las botas compresivas o pasos alrededor del paciente. Los problemas técnicos que se deben valorar son: grandes traumatismos, lesiones en el scalp y registros en niños; todos ellos han de tenerse en cuenta y, en la medida de lo posible, intentar resolverse. Se puede deprimir la electrogenia bajo una hipotermia moderada, temperatura menor o igual a 30 °C, lo que puede incrementar las anomalías en el registro . Puede alterar el metabolismo y el aclaramiento de los fármacos. La recomendación es que la temperatura corporal sea igual o superior a 34 °C . Una serie de medicamentos y sustancias pueden confundir cuando se va a realizar el diagnóstico de muerte encefálica, ya que pueden disminuir el flujo cerebral y generar una depresión en la actividad bioeléctrica cerebral: barbitúricos, propofol, opioides, opiáceos, antiepilépticos, benzodiacepinas, fenotiacinas, antidepresivos tricíclicos, relajantes musculares, alcohol y cocaína ; sin embargo, es muy difícil determinar su rango terapéutico, niveles de toxicidad en muchos casos y, ante todo, el tiempo real de eliminación individualizado para cada paciente. Cuando no se conoce el tiempo de exposición, las guías internacionales recomiendan el uso de un test de exposición a tóxicos: Valorar si los niveles de medicación no exceden el rango terapéutico en la sangre. 
Si se asume que el paciente tiene una función hepática y renal normal (valorar analíticamente) y se pueden medir los niveles de medicación o aclaramiento, se debe permitir que al menos pasen cinco vidas medias del fármaco antes de realizar la valoración clínica y, en nuestro caso, electroencefalográfica. Si se conoce el fármaco, pero no se puede cuantificar, el paciente debe ser observado no menos de cuatro vidas medias del fármaco, teniendo en cuenta que su eliminación no interfiere con otros fármacos, disfunción orgánica o hipotermia. Si el fármaco en particular no se conoce, pero existe alta sospecha de que persiste su efecto, debe observarse al paciente al menos 48 horas para valorar cambios en la exploración neurológica y, si no se producen, se puede realizar el estudio. Si existe intoxicación por alcohol confirmada o sospechada, los valores de alcohol en sangre deben ser iguales o menores a 80 mg/dL. Deben corregirse las alteraciones metabólicas graves, endocrinas o ácido-base antes de plantear el estudio EEG. Para ello se ha de conocer el fármaco, su mecanismo de acción y su vida media , como se describe en la . Algunas recomendaciones indican que, en niños prematuros y recién nacidos menores de 7 días de vida, se debería repetir la exploración neurológica cuantas veces sea necesario . En niños, las causas más frecuentes son: trauma, encefalopatía anóxica, infecciones y neoplasias cerebrales . Algunos trastornos metabólicos, como el fallo hepático, la insuficiencia renal, la hipoglucemia grave o la hiponatremia, pueden alterar el registro; por tanto, es necesario confirmarlo clínicamente y valorar si procede repetir el EEG o, en su caso, realizar una prueba de flujo. 
Tras eliminar todo tipo de artefactos que puedan dar lugar a duda o error, es necesario valorar que el registro sea compatible con ausencia de actividad cerebral (sin ningún tipo de actividad cerebral a 2 µV de amplitud y sin respuesta a ningún estímulo) en todas las derivaciones valorables (especialmente, en áreas temporales que pueden presentar la última actividad electroencefalográfica residual). Existen series descritas donde esta actividad residual ha persistido hasta durante 168 horas . Debe contener los datos de filiación del paciente completos: nombre, edad, fecha de nacimiento y número de tarjeta sanitaria o de historia clínica. La exploración neurológica debe estar detallada. Se debe incluir la presencia/ausencia de medicación y, en este último caso, el tiempo que ha transcurrido desde la suspensión de ésta. El informe EEG debe reunir una descripción mínima y ordenada de: número de electrodos, filtros, condiciones de registro, actividad basal, artefactos y estímulos realizados. La conclusión debe ser clara, concisa e incluir la hora de finalización del registro. Se han de incluir, en lo posible, imágenes del estudio. La hora que se certificará será la hora de finalización del estudio. Debe llevar la firma y datos del facultativo especialista. En la se resumen los conceptos más importantes desde el punto de vista neurofisiológico en la muerte encefálica. El potencial evocado se define como la respuesta generada en el tejido nervioso como respuesta a un estímulo que de forma habitual en la práctica clínica y son somatosensoriales, visuales o auditivos. Son técnicas que ayudan en el diagnóstico de muerte encefálica cuando es posible realizarlos, siempre y cuando la lesión no sea primariamente infratentorial. Si bien son test legalmente aprobados, se utilizan menos que el EEG. Son técnicas objetivas, de fácil acceso, no invasivas, que pueden realizarse a pie de cama del paciente . 
They are not altered as significantly as the EEG under the effects of sedative medication; however, although they are not in themselves confirmatory tests of brain death, they can serve in a multimodal fashion as tests supporting the EEG. For the assessment of the critically ill patient, we describe their general characteristics and limitations, not only in brain death but also in general practice.

Somatosensory evoked potentials

These are used to assess the functional integrity of the somatosensory pathway by stimulating a peripheral nerve, which generates a sequence of potentials at different points along the nerve pathway and in the cerebral cortex. The most widely used is the median nerve (a mixed nerve) somatosensory evoked potential at the wrist. Access to the stimulation site is therefore important (if the wrist is not accessible, consider stimulating at the elbow). The stimulus consists of a square-wave current with short pulses (200 µs), at low frequency (2-3 Hz), and with the intensity set at the maximum value (approximately 20 mA). Silver chloride cup electrodes with conductive paste, or needle electrodes, are recommended at all levels. The recording parameters are: a display of 0.5 to 20 µV/division with a total recording time of 50 ms (5 ms/division); a high-pass filter below 3 Hz and a low-pass filter above 2,000 Hz; and two averaging blocks of at least 500 repetitions.

Assessment of the minimum responses includes:
N9: peripheral response. Active electrode at Erb's point ipsilateral to the stimulation. Reference electrode at Fz-FPz or at Erb's point contralateral to the stimulation.
N13: cervical. Postsynaptic activity of the posterior horn of the spinal cord. Active electrode over the C7 spinous process. Reference electrode on the anterior cervical region (over the glottis).
P14: cervicomedullary junction. Active electrode at C3-C4 ipsilateral to the stimulation. Reference electrode at the shoulder, the earlobe, or Erb's point ipsilateral to the stimulation.
N20: primary somatosensory cortex. Active electrode at C3-C4 contralateral to the stimulation. Reference electrode at Fz, or C3-C4 ipsilateral to the stimulation.

The brainstem (P14) and cortical (N20) responses are assessed; given their sensitivity to cerebral anoxia, their absence is highly sensitive for a poor neurological prognosis or brain death. As a drawback, the N20 response may be altered or inconclusive if there is a concomitant cervical spinal cord injury with absence of N13 on fractionated recording; it is not valid in the first 24 hours after a cardiorespiratory arrest, and it may be altered by hypothermia (the cooling-off period).

Brainstem auditory evoked potentials

These are obtained with auditory stimuli, using alternating clicks delivered through headphones at 85 dB (nHL), 0.1 ms in duration, at 19.1 pps, and are assessed within the first 10 ms. The responses originate in the auditory nerve and in the auditory pathways of the brainstem, from the cochlear nuclear complex to the inferior colliculus:
Wave I: distal portion of the acoustic (cochlear) nerve.
Wave II: proximal portion of the acoustic nerve.
Wave III: pontomedullary junction (it may also include the cochlear nuclei and the trapezoid body).
Waves IV-V: reflect the propagation of stimulus conduction along the lateral lemniscus up to the inferior colliculus.

For the montage, silver chloride cup electrodes with conductive paste, or subdermal needle electrodes, are placed at Cz and at the right and left earlobes (A2-A1). The active electrode is A2 if stimulation is on the right or A1 if it is on the left, with Cz as reference. The analysis time is 10 ms/screen; the sensitivity, 10 µV/division; and the filters, 100 to 3,000 Hz. Two averaging blocks of 1,000 responses are recommended.

Special consideration must be given to possible pre-existing lesions of the cochlea and the auditory nerve, which may be caused by hypoxic or ischemic mechanisms. In traumatic lesions, responses may be absent because of a translabyrinthine fracture. For a correct assessment, wave I must be present for the recording to be considered valid. For brain death, the diagnostic criterion is the bilateral absence of responses from the brainstem to the cerebral auditory cortex; however, as described, this is a very sensitive but poorly specific technique. Among its drawbacks, it can be altered by lesions of the VIII cranial nerve and by brainstem lesions. Waves I-III-IV can be altered, above all, by prior ototoxic medication. Hypothermia can induce an increase in the absolute latencies of the waves.

Visual evoked potentials

The aim is to assess, over the occipital cortex, the response to unstructured stimuli delivered by light-emitting diode (LED) screens mounted on goggles, yielding what are known as flash visual evoked potentials. These explore the visual pathway from the ganglion cells that form the optic nerve to the activity of the occipital visual cortex. Silver chloride cup electrodes with conductive paste, or subcutaneous needle electrodes, are used at Fz and Oz, with the reference or ground electrode at Cz, A1, or A2 (or the A1-A2 average). The recording is made over Fz-Oz, assessing the response of the second electropositive wave, P2, with variable latencies of up to 120 ms, similar to the response obtained with structured pattern stimuli (the P100 wave). The analysis time is 300 ms, with two averaging blocks of at least 100 responses each. For brain death, the bilateral absence of responses in the visual cortex is assessed, with preservation of the responses on the electroretinogram.
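The averaging blocks specified for each modality (at least 500 repetitions for somatosensory, 1,000 for auditory, 100 for visual) rely on the fact that the evoked response is time-locked to the stimulus while background EEG activity is not, so residual noise falls roughly as 1/√N. A minimal Python sketch illustrates this; the 2 µV response buried in 20 µV of simulated background activity is an illustrative assumption, not recording data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 ms recording window, with an assumed N20-like 2 µV peak near 20 ms
t = np.linspace(0.0, 0.05, 500)
evoked = 2.0 * np.exp(-((t - 0.02) ** 2) / (2 * 0.003 ** 2))

def average_trials(n_trials, noise_uv=20.0):
    """Average n_trials sweeps; background EEG modelled as Gaussian noise."""
    trials = evoked + rng.normal(0.0, noise_uv, size=(n_trials, t.size))
    return trials.mean(axis=0)

# Residual noise after averaging falls roughly as 1/sqrt(N)
resid_1 = np.std(average_trials(1) - evoked)      # ~20 µV
resid_500 = np.std(average_trials(500) - evoked)  # ~20/sqrt(500) ≈ 0.9 µV
print(resid_500 < resid_1 / 10)  # prints True
```

This is why a single sweep shows essentially no recognizable potential, while a 500- or 1,000-sweep average makes a microvolt-scale response measurable.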
Visual evoked potentials can be altered by lesions of the optic nerve or the retina and by deep sedation; like the auditory potentials, this is a sensitive but poorly specific technique, so it must be used in a multimodal fashion.
As discussed in detail for each technique, there are factors that limit the neurophysiological techniques in the diagnosis of brain death:
Presence of hypothermia.
Medication that may interfere with the observation periods.
Patients with metabolic disturbances.
Scalp lesions or major trauma of the craniofacial skeleton.
Technical limitations in disconnecting equipment that generates artifacts, especially electrical ones.

From the neurophysiological point of view, the diagnosis of brain death requires the unification of both clinical and technical criteria. Neurophysiological examinations can be crucial in supporting and confirming the diagnosis of brain death, especially to shorten observation periods or when other diagnostic techniques, such as flow studies, are not possible. The EEG is the most reliable and accessible neurophysiological test for diagnostic support; however, it requires skill on the part of the physician interpreting the test, the absence of medication that could influence the recording, control of body temperature, and adequate hemodynamic parameters. Its performance must be adjusted to each age group and must always meet optimal quality standards. Evoked potentials can also make a multimodal contribution alongside the electroencephalographic recording. This document is the first updated consensus in Spanish, from the Clinical Neurophysiology Society of the Communities of Valencia and Murcia, on the correct use of neurophysiological techniques in the diagnosis of brain death.
Evaluating the impact of a multimodal perinatal education program on birth preparedness, mental health, and birth outcomes among rural primiparous women: a retrospective cohort study

Maternal and neonatal morbidity and mortality remain significant public health challenges globally, despite advancements in medical practices and healthcare policies. The World Health Organization (WHO) estimated that approximately 287,000 women died during and following pregnancy and childbirth in 2020, with a substantial number of deaths occurring in low-resource settings. One critical factor contributing to these adverse outcomes is a lack of maternal knowledge regarding perinatal danger signs, together with inadequate birth preparedness and complication readiness (BPCR). Rural primiparous women, experiencing childbirth for the first time, are particularly vulnerable because of potential gaps in knowledge, access to healthcare resources, and support systems. This knowledge deficit can lead to delayed decision-making, late arrival at health facilities, and subsequent adverse outcomes. In light of these challenges, educational interventions have emerged as a pivotal strategy in maternal and neonatal health programs. Multimodal education, which encompasses a variety of methods such as face-to-face workshops, digital information platforms, and personalized counseling sessions, has the potential to significantly improve maternal health literacy. These comprehensive programs are designed to enhance women’s understanding of pregnancy-related danger signs, the necessity of skilled birth attendance, and the steps involved in BPCR. Several studies have demonstrated the effectiveness of educational interventions in improving pregnancy outcomes. Çankaya et al. indicated that antenatal education increased the likelihood of vaginal birth and improved women’s psychological well-being.
Therefore, this study aims to evaluate the impact of a multimodal perinatal education program on birth preparedness, perinatal depression, anxiety, and birth outcomes among rural primiparous women. By employing a retrospective cohort design and utilizing data from a rural hospital, this research provides valuable insights into the effectiveness of holistic perinatal education in a rural setting. The findings of this study have the potential to inform future interventions and policies, contributing to the improvement of maternal and neonatal health in rural communities.

Study design and participants

This study employed a retrospective cohort design to assess the implications of a holistic perinatal education program on birth preparedness, perinatal depression, anxiety, and birth outcomes among rural primiparous women. The study utilized patient records and archived data collected between January 2021 and December 2022 from the selected hospital. From the records, primiparous women aged 18 to 49 years holding a ‘rural hukou’ (rural household registration) were identified. The dataset included women who had attended at least one antenatal care (ANC) session during the defined period and had given birth to a live singleton baby within this time frame. Primiparous women with incomplete medical records or missing essential data points, and those who transferred out or received the majority of their prenatal care outside the selected healthcare institution, were excluded from this study. To be included in the Control Group, participants needed to have received routine prenatal care without exposure to the multimodal education program. Women in the Intervention Group were required, in addition to receiving routine prenatal care, to participate in all three components of the multimodal perinatal education program to ensure consistent exposure.
Specifically, they had to attend at least one in-person workshop, confirm their use of the digital learning platform, and complete at least one personalized counseling session with a maternal health professional. After applying the inclusion and exclusion criteria, a total of 149 rural primiparous women were identified. They were then categorized into two groups: 77 participants in the Intervention Group, who were exposed to the perinatal education program, and 72 in the Control Group, who were not (Fig. ). Authorized personnel retrospectively retrieved pertinent data from the institution’s electronic medical record system. This encompassed demographic details, ANC visit specifics, interventions administered, and resultant outcomes.

Multimodal perinatal education program

The multimodal perinatal education program was a carefully curated curriculum designed to offer a holistic approach to maternal education. It combined physical workshops, an online digital learning platform, and tailored counseling sessions. The physical workshops were interactive sessions in which participants received hands-on training from experienced obstetricians and maternal health nurses. The workshops covered essential topics such as labor processes, pain management techniques, breathing exercises, and postnatal care. They typically lasted 2–3 h and were conducted in small groups of 10–15 participants, allowing for individualized attention and engagement. The content across workshops was standardized to ensure consistency in information delivery, and participants were encouraged to attend at least one session during pregnancy.
The digital learning platform offered an online repository of e-learning modules, ranging from video tutorials on prenatal care to interactive quizzes assessing participants’ knowledge, together with community forums where participants could discuss their concerns and share experiences. The platform was user-friendly and accessible via mobile devices. Any questions or concerns arising during engagement with the digital platform were addressed during the personalized counseling sessions, ensuring participants received tailored and practical solutions.

Recognizing that every pregnancy is unique, personalized counseling sessions were arranged to cater to the specific needs of each participant. Conducted by trained maternal health professionals, including experienced obstetric nurses, the sessions lasted approximately 30–60 min, offered individualized guidance and emotional support, and covered individualized birth plans, mental health concerns, and other specific issues raised by the participants. Counseling sessions were typically scheduled once per trimester, and participants were encouraged to attend at least one session during pregnancy.

Standard prenatal care

The Control Group received standard prenatal care, which included routine ANC visits adhering to established clinical guidelines. Primiparous women were encouraged to attend 7 to 11 ANC sessions at specific gestational weeks: 6 ∼ 13 weeks + 6 days, 14 ∼ 19 weeks + 6 days, 20 ∼ 24 weeks, 25 ∼ 28 weeks, 29 ∼ 32 weeks, 33 ∼ 36 weeks, and 37 ∼ 41 weeks. For high-risk pregnancies, an increased number of visits is advised.
Standard ANC examinations primarily included physical assessments, obstetric evaluations, and laboratory tests such as blood and urine analyses, liver and renal function tests, and screenings for hepatitis B, syphilis, and HIV. However, participation in the recommended visits remained voluntary, as there were no enforced mandates requiring women to strictly adhere to the suggested schedules.

Outcome measures

Data were retrieved from the medical records of the participants. The extracted data covered demographics, BPCR scores, mental health metrics, and birth outcomes.

BPCR: This measure gauges a woman’s proactive approach to childbirth and potential complications. The BPCR assessment is split into two facets: knowledge and practice. The knowledge score assessed a woman’s understanding of key aspects of birth preparedness and potential complications: her comprehension of potential danger signs, the importance of identifying a birth location, the necessity of saving money for birth-related expenses, the role of transportation, and the importance of having emergency contacts. Participants were given a questionnaire, and their knowledge was scored on a scale with a possible maximum of 10; a higher score indicates better knowledge. The practice score pertained to the practical steps a woman had taken in preparation for childbirth and potential complications, including actually identifying a birth location, saving money for birth-related expenses, arranging transportation, and having emergency contacts listed. Participants’ practices were scored on a scale, again with a potential maximum of 10; a higher score indicates better preparedness in terms of actual practices.

Mental health was evaluated using standardized tools: the Beck Depression Inventory-Second Edition (BDI-II) for depressive symptoms and the State-Trait Anxiety Inventory-State Anxiety scale (STAI-S) for anxiety symptoms.
Both tools have been validated in numerous populations and contexts, offering credible metrics for depression and anxiety, respectively. The BDI-II is a widely recognized self-report inventory for measuring the severity of depression in adults and adolescents aged 13 and older. It consists of 21 multiple-choice questions, each designed to assess a specific symptom common among people with major depressive disorders. Subjects are asked to respond to each question based on a two-week period (rather than on the current day only). Each of the 21 items is scored on a scale of 0 to 3, for a total possible score ranging from 0 to 63. The STAI-S is a standardized tool used to measure state anxiety (anxiety about an event), as distinct from trait anxiety (anxiety level as a personal characteristic). For the purposes of this study, we used the state anxiety portion (STAI-S), which evaluates the current state of anxiety, asking how respondents feel “right now,” using items that measure subjective feelings of apprehension, tension, nervousness, and worry. The STAI-S has 20 items, each scored on a scale of 1 to 4; scores can therefore range from 20 to 80, with higher scores indicating greater anxiety.

Birth outcomes encompassed various parameters, including the mode of delivery (vaginal or cesarean), reasons for cesarean delivery (mother-related, fetus-related, or other reasons), gestational age at birth (categorized by weeks), the method of placenta delivery (natural or artificial), birth weight (categorized by weight in grams), neonatal Apgar score at 5 min, neonatal complications, NICU admission, preterm birth occurrence, and breastfeeding initiation (defined as feeding the newborn with breast milk within the first 24 h following delivery).

Statistical analysis

Data were gathered, curated, and analyzed using SPSS software (version 25.0).
To ascertain the normality of the data distribution, the Kolmogorov-Smirnov test was utilized. Based on the distribution of the data, descriptive statistics were presented either as mean ± standard deviation (for normally distributed data) or as median with the 25% and 75% quartile range (for non-normally distributed data). For comparisons between the two groups (Intervention and Control), independent-samples t-tests were applied for continuous variables that followed a normal distribution, while the Mann-Whitney U test was employed for non-normally distributed continuous variables. For categorical variables, the Chi-square test or Fisher’s exact test was used, as appropriate. To assess changes within each group over time, paired-samples t-tests were conducted for normally distributed data, and the Wilcoxon signed-rank test was utilized for non-normally distributed data. The association between categorical variables over time was analyzed using the McNemar test. All statistical tests were two-tailed, and P values less than 0.05 were regarded as indicating statistically significant differences.
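The decision rule in the analysis plan (normality check, then independent-samples t-test or Mann-Whitney U) can be sketched outside SPSS as well. The following Python/SciPy fragment uses simulated scores for the two group sizes (77 and 72) purely for illustration; note that applying the Kolmogorov-Smirnov test to z-scored data is a rough stand-in, since estimating the mean and SD from the sample strictly calls for the Lilliefors correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical continuous scores for the two groups (n = 77 and n = 72);
# simulated values, not the study's data.
intervention = rng.normal(7.0, 1.5, size=77)
control = rng.normal(6.5, 1.5, size=72)

def compare_groups(a, b, alpha=0.05):
    """Mirror the analysis plan: independent-samples t-test when both
    samples pass the normality check, Mann-Whitney U otherwise."""
    normal = all(
        stats.kstest(stats.zscore(x), "norm").pvalue > alpha for x in (a, b)
    )
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "mann-whitney-u", stats.mannwhitneyu(a, b).pvalue

test_name, p_value = compare_groups(intervention, control)
print(test_name, round(p_value, 3))
```

The same branching logic would apply to the within-group comparisons, swapping in the paired t-test and the Wilcoxon signed-rank test.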
Baseline characteristics of participants

A total of 149 rural primiparous women were enrolled, with 77 participants in the Intervention Group and 72 in the Control Group. The groups showed no significant difference in age, with a mean of 31.16 ± 6.37 years in the Intervention Group and 31.65 ± 7.42 years in the Control Group (P = 0.661). Other demographic details, such as residence type, marital status, educational level, monthly income, religious affiliation, stage of pregnancy at first ANC visit, and number of ANC visits before recruitment, were also comparable between the groups (Table ).
BPCR practice and knowledge scores

Post-intervention evaluations revealed a significant enhancement in the BPCR knowledge scores for the Intervention Group, moving from a median score of 6.0 (4.0, 8.0) pre-intervention to 7.0 (5.0, 9.0) post-intervention ( P < 0.001). In comparison, the Control Group’s scores remained relatively stable, starting at 7.0 (5.0, 8.0) and concluding at 7.0 (5.0, 9.0) ( P = 0.208). BPCR practice scores also demonstrated improvements in the Intervention Group, with scores rising from 5.0 (3.0, 8.0) to 6.0 (4.0, 8.0) ( P = 0.047). The Control Group showed a slight increase, but not as pronounced as the Intervention Group, as outlined in Table .

Maternal depressive and anxiety symptoms over time

Throughout the study’s duration, the Intervention Group consistently exhibited a reduction in depressive and anxiety symptoms across the various time points. Notably, the median BDI-II score in the Intervention Group decreased from 18 (15, 22) at baseline to 11 (9, 16) by the 6-month follow-up ( P = 0.021). Similarly, the STAI-S scores for anxiety showed a decline from 45 (40, 52) at baseline to 35 (30, 42) at the 6-month follow-up ( P = 0.017). These favorable shifts contrast with the Control Group’s relatively stable scores during the same period, as detailed in Table .

Birth outcomes

A notable difference was observed in the mode of delivery, with 57.1% of the Intervention Group delivering vaginally, compared to 44.4% in the Control Group, a statistically significant difference ( P = 0.035). Reasons for cesarean deliveries, particularly those related to the mother, were more frequent in the Control Group at 57.5% than the 51.5% observed in the Intervention Group ( P = 0.021). Comprehensive birth outcomes, including birth weight distributions and neonatal outcomes, are elaborated upon in Table .
This study evaluated the impact of a comprehensive multimodal perinatal education program on birth preparedness, perinatal mental health, and birth outcomes among rural primiparous women, demonstrating significant benefits in enhancing birth preparedness and reducing perinatal depression and anxiety symptoms. Birth preparedness is a crucial component of maternal health, encompassing a woman’s knowledge of and readiness for childbirth and potential complications. Adequate birth preparedness has been linked to improved maternal and neonatal outcomes, highlighting its importance in perinatal care. However, disparities in access to perinatal education and resources are prevalent, particularly in rural settings, leading to gaps in knowledge and preparedness. Higher levels of maternal education have been associated with reduced maternal mortality and better overall maternal health. In this study, the intervention group demonstrated a significant improvement in BPCR knowledge and practice scores, suggesting the effectiveness of comprehensive education strategies in rural maternal health. This is consistent with studies suggesting that health education can significantly improve the knowledge and health behaviors of primiparous women. The enhancement may be attributed to the multimodal approach of the perinatal education program, which provided diverse platforms for learning and engagement, thereby catering to different learning preferences and ensuring a broader reach. Mental health is another important component of maternal health.
Perinatal depression and anxiety, as common mental health conditions, have detrimental effects on both the mother and the developing fetus and have been associated with adverse outcomes such as preterm birth, low birth weight, and developmental delays. Rural women are at increased risk of experiencing perinatal mental health issues due to factors such as limited access to mental health services and socioeconomic challenges. Addressing these mental health challenges is crucial, as early intervention has been shown to mitigate long-term adverse effects. In this study, a significant reduction in maternal depressive and anxiety symptoms was observed in the intervention group. These findings are consistent with the existing literature supporting the role of antenatal education in reducing perinatal mood disorders. The program’s inclusion of tailored counseling sessions likely provided the necessary psychosocial support, potentially creating a sense of empowerment and resilience among the participants. Moreover, this study also revealed a significant difference in the mode of delivery, with a higher percentage of vaginal births in the intervention group, which might be related to increased confidence and preparedness for childbirth following the education program. Research supports an association between antenatal education and increased rates of natural childbirth due to enhanced knowledge and coping mechanisms. The lower incidence of cesarean sections, especially those due to maternal factors, in the intervention group further suggested the program’s effectiveness in enhancing the women’s physical and psychological readiness for childbirth. Interestingly, both groups exhibited high breastfeeding rates with no significant differences, possibly reflecting pre-existing strong breastfeeding awareness and practices within this population.
In rural China, traditional beliefs and cost-saving motives often encourage the majority of mothers to choose breastfeeding. These findings emphasize the importance of integrating multimodal perinatal education programs into standard prenatal care, particularly in resource-limited settings. Compared to urban women, rural women often exhibit significantly lower levels of perinatal knowledge, which can lead to inadequate birth preparedness and poorer maternal and neonatal outcomes. Because such programs have been demonstrated to effectively address critical knowledge gaps and mental health challenges among underserved populations, policymakers should consider supporting their implementation and scaling to address disparities in maternal health education and mental health support. From a healthcare institution perspective, the implementation of flexible multimodal strategies is essential to effectively address the diverse needs of rural and economically disadvantaged populations. These strategies not only improve accessibility to perinatal education but also cater to a variety of learning preferences, thereby promoting broader engagement and effectiveness. Research demonstrates that multimodal approaches enhance both participation and learning outcomes in resource-limited settings. Training healthcare providers to deliver these interventions effectively is also vital: well-trained providers can address resource constraints and ensure the successful implementation of multimodal education programs, even in resource-limited settings. This study also has certain limitations. Its retrospective cohort design could not control for all confounding factors, such as social support, a critical factor influencing maternal and neonatal outcomes, which may affect the reliability of the results. In addition, a large number of participants were excluded due to the lack of relevant clinical indicators or missing data, which might have introduced selection bias.
The lack of endpoint data could be attributed to low patient compliance, as some participants might have lacked health awareness or education and failed to complete the necessary assessments or examinations. Others may have been transferred to other medical institutions for delivery or follow-up care, resulting in their exclusion from this study. Another limitation is the variation in the stage of pregnancy at the first ANC visit and in the number of ANC visits prior to recruitment. While all participants in the Intervention Group completed the required program components, those who joined later in their pregnancy may have engaged within a condensed timeframe, which could have influenced their overall adherence and the intervention’s impact. A further limitation was the inability to systematically track participant engagement across all components of the multimodal perinatal education program: precise usage metrics for the digital learning platform were unavailable, and most participants attended only the minimum required sessions for physical workshops and personalized counseling, limiting the variability needed for a detailed analysis of the relative contributions of each program component. Future studies should employ prospective designs that allow for systematic tracking of engagement levels and their impact on outcomes. Finally, the reliance on self-reported data via questionnaires might introduce subjective biases. Future research should aim to include randomized controlled trial designs in larger and more diverse populations to draw stronger conclusions. In conclusion, this study supports the implementation of multimodal perinatal education programs in rural settings to enhance birth preparedness, reduce perinatal depression and anxiety, and improve birth outcomes.
These findings highlight that incorporating comprehensive perinatal education into standard prenatal care, especially in rural environments where access to information and health services may be limited, can greatly contribute to improving outcomes for pregnant women and newborns.
Parameterization of Physiologically Based Biopharmaceutics Models: Workshop Summary Report

Introduction

The use of physiologically based biopharmaceutics models (PBBMs) to support the understanding of drug product (DP) quality attributes and the setting of clinically relevant specifications for their control is gaining importance, as shown in the growing number of submissions to regulatory authorities around the world and publications on this topic in the scientific community. The workshop “Physiologically Based Biopharmaceutics Modeling (PBBM) Best Scientific Practices for Drug Product Quality: Regulatory and Industry Perspectives”, sponsored by FDA in collaboration with the University of Maryland Center of Excellence in Regulatory Science and Innovation (M-CERSI), was held on August 29–31, 2023 and facilitated the discussion of PBBM case studies together with day-specific hot topics. This paper provides a summary report on Day 1 of this workshop, which focused on considerations for PBBM parametrization. The morning session included a keynote speech from Prof. Jennifer Dressman, the readout from regulatory agencies on the analysis of four submitted PBBM case studies, and a panel discussion focusing on how sponsors parametrized their models with in vitro inputs. During the afternoon session, five parallel breakout (BO) sessions covered the following topics:
- Solubility: Best practices for integration of solubility in PBBM
- Development of biopredictive dissolution methods: Best practices for data generation as input to PBBM
- Methods for integrating dissolution in PBBM: Best practices for modeling dissolution
- Precipitation: Best practices for integration of precipitation in PBBM
- Permeability: Best practices for integration of permeability in PBBM

Morning Presentations

2.1 Introduction to the Workshop.
Bhagwant Rege (FDA)

FDA’s Office of Pharmaceutical Quality believes that everyone deserves to have confidence in their next dose of medicine and that pharmaceutical quality ensures the availability, safety, and efficacy of every dose. Biopharmaceutics is the link between DP quality and clinical performance in the patient. Patient-centric quality standards (PCQSs) ensure that the DP consistently delivers clinical performance to the patient as described on the label in terms of safety and efficacy over its shelf life and from batch to batch. PCQSs can provide additional flexibility to pharmaceutical manufacturers while maintaining quality by establishing acceptance criteria based on clinical performance rather than process capability or manufacturing process control. PCQSs also avoid under- or over-discriminating specifications, which are not in the patient’s interests. The main obstacle to establishing PCQSs is a weak or often missing link between the in vitro and in vivo performances of the DPs. PBBM can help to overcome this obstacle. PBBM is a subset of Physiologically Based Pharmacokinetic (PBPK) models that are specific for biopharmaceutics applications. PBBM has more than 10 years of regulatory history. PBBM is mechanistic by nature because it integrates physicochemical properties of the drug, drug substance (DS), DP, the formulation composition, the route of administration, and the gastrointestinal (GI) physiology to predict in vivo exposures. PBBM can provide the crucial link between in vitro and in vivo performance of drug products to establish PCQS, which includes the dissolution method and acceptance criteria, dissolution safe space, and specifications for critical bioavailability attributes such as particle size distribution, polymorphism or crystalline content, granule properties, and manufacturing process parameters.
PBBM can also provide supportive evidence for biowaivers, including biopharmaceutics classification system (BCS) based biowaivers and additional strength waivers, as well as scientific bridging for 505(b)(2) products. FDA has cosponsored two workshops on PBBM, in 2017 and 2019. FDA also published a draft guidance on the use of PBPK analyses for biopharmaceutics applications in 2020. Currently, global regulatory acceptance of PBBM faces some challenges. These include the lack of a prospective PBBM strategy, leading to inadequate model input and validation, and biologically implausible optimizations to fit model predictions to clinical data. A primary objective of this workshop was to discuss best practices on PBBM with respect to model input (in vitro and in vivo), model validation, and model applications; to discuss new areas of PBBM application such as generics and modified release (MR) products; and finally to explore areas of agreement between industry and regulators for future harmonization efforts.

2.2 Keynote Speech: PBBM: Impact and Future Perspective. Jennifer Dressman

Prof. Jennifer Dressman kicked off the conference with a plenary lecture on the current status of PBBM for various routes of administration. She highlighted that the physiology at the given site of administration should be adequately captured and that release tests must be tailored to the specific site of application, as well as to the dosage form applied. Modeling is then required to bring both of these aspects together and translate the results into a prediction of plasma and/or local concentration profiles. For modeling systemic levels, it is highly recommended to start with the disposition kinetics and compare the model against clinical intravenous (IV) data whenever possible. The most advanced PBBMs are probably those for oral drug delivery.
Much data exists on the physiology of the GI tract, and quite sophisticated models are already available in the most frequently used software tools. One area in which we could do better is the modeling of GI motility, particularly in the fed state, which may have a large impact on the gastric distribution of the drug and consequently its gastric emptying. In the past few years, there has been a concerted effort across academic institutions to create biopharmaceutical tests which better mimic release from the formulation in the GI tract. As a result, biorelevant media have largely replaced United States Pharmacopeia (USP) standard buffers as test media in pharmaceutical development. However, the most widely used equipment is still the USP Type 2 (Paddle) apparatus, and it remains to be seen whether other apparatuses can attain the same broad level of acceptance. Likewise, while assessing permeability by running bioavailability studies in animals has been largely replaced by studies in cell lines such as Caco-2 and Madin-Darby canine kidney (MDCK) cells, we still need better models for human permeability. To build a “digital twin”-based population pharmacokinetic (PK) model, the variability in physiology and its ramifications in terms of inter- and intraindividual variability in release rate and permeability must be taken into account. Efforts to mechanistically model both release from different types of dosage forms and drug permeability are already underway and have achieved some success. Using ibuprofen as a test compound, the creation of a robust in silico model to describe its dissolution under various conditions was demonstrated. Further, case examples showcased the joint impact of formulation and food on itraconazole PK and the joint impact of formulation and a proton pump inhibitor (PPI) on the PK of an AstraZeneca development compound. Similar approaches have been used to build PBBMs for other routes of administration.
For the dermal route, many different formulation types are available, and the choice of formulation will have a strong impact on the depth of permeation into (and beyond) the skin. The challenge lies in tailoring the release studies to the intended site of drug delivery. Like that of the GI tract, skin physiology is quite well understood, and the next tasks will be to capture changes in skin physiology with body location, patient age, ethnicity, and disease state. Nevertheless, PBBM has already progressed to the point where virtual bioequivalence (VBE) assessments of topical formulations are starting to gain acceptance at the regulatory level. For long-acting injectables, PBBMs are used to describe the simultaneous release and biodegradation of polymeric vehicles, and there have also been some recent advances in biopharmaceutics evaluation, e.g., the Dispersion Releaser. In the case of products that are inhaled, biopharmaceutical models include considerations of particle size and shape, with measurements of tissue permeability in the lung frequently being conducted in Calu-3 cells. In summary, PBBM has really picked up the pace in the past few years, and by 2030, it is likely that we will have reliable PBBM across a range of routes of administration. The advantages of PBBM are self-evident: with the physiological “digital twin” approach, we should be able to predict first-in-human levels better, as well as reduce the number and/or size of studies necessary to identify drug–drug interactions (DDI) and food effect interactions. The impact of PBBM will be more biowaivers based on VBE, application to “beyond the rule of five” drugs, and the reduction or even elimination of animal studies in formulation development, which will culminate in more effective medicines becoming available to patients sooner.

2.3 Case Study 1: A PBBM Based Dissolution Safe-Space for a BCS Class II Drug Substance.
Shereeni Veerasingham and Arthur Okumu (Health Canada)

2.3.1 Background

PBBM was utilized to establish a dissolution safe-space for an immediate release (IR) tablet from Amgen containing a BCS Class II drug substance. The drug is a weak base, formulated as a hydrochloride salt, with a p K a of approximately 9. Following oral administration of the tablet, the maximum plasma concentration ( C max ) is achieved in approximately 6 h. Administration of the tablet with food increases the rate and extent of drug absorption, with a greater impact observed with a high-fat meal compared to a low-fat meal. The clinical knowledge space includes tablet variants that were evaluated in clinical bioequivalence studies, including a tablet variant that was found to be nonbioequivalent to the target profile. The nonbioequivalent tablet variant had a significantly slower in vitro dissolution profile than the target profile. PBBM-based VBE trials were conducted to determine the in vitro dissolution edge of failure for bioequivalence and establish a dissolution safe-space for the tablet. The question of interest was: can the dissolution specification for the oral tablet be widened while still ensuring bioequivalent in vivo performance?

2.3.2 Model Development

The PBBM used the Advanced Compartmental Absorption and Transit (ACAT) model in GastroPlus (ver. 9.8.3, Simulations Plus Inc., Lancaster, CA). Changes were made to the default ACAT model based on literature research, in vitro data, and clinical observations to optimize simulations for the tablet. The disposition model was developed based on the physicochemical and biopharmaceutical properties and intravenous (IV) and oral PK data from 5 clinical studies. Initial Michaelis–Menten constant ( K m ) and maximum reaction velocity ( V max ) values for CYP3A4 and CYP1A2 were estimated by ADMET Predictor (Simulations Plus Inc., Lancaster, CA).
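As a brief illustration of the Michaelis–Menten kinetics underlying these K m and V max terms (the values below are placeholders, not the fitted case-study parameters):

```python
# Michaelis-Menten elimination: v = Vmax * C / (Km + C).
# Vmax and Km are illustrative placeholders, not the CYP3A4/CYP1A2
# values fitted in the case study.

def mm_rate(c, vmax, km):
    """Metabolic rate at concentration c (units follow vmax)."""
    return vmax * c / (km + c)

vmax, km = 100.0, 5.0  # hypothetical units
print(round(mm_rate(0.05, vmax, km) / 0.05, 1))  # 19.8, close to Vmax/Km (first-order, C << Km)
print(round(mm_rate(500.0, vmax, km), 1))        # 99.0, close to Vmax (saturated, C >> Km)
```

At concentrations well below K m the rate is approximately ( V max / K m ) · C, i.e., a first-order elimination governed by the intrinsic clearance; near and above K m the kinetics saturate, which is why K m and V max, rather than a single clearance value, were optimized against the IV data at several dose levels.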
Clearance was determined by optimizing the K m and V max values to fit the observed clinical plasma concentration following the IV infusion of the drug at three different doses. During oral absorption model development, the effective permeability ( P eff ) was fitted to PK data for the oral solution obtained under fasting conditions and verified by comparison of the simulation for fed conditions to observed PK data. In addition, the percentage of fluid in the small intestine and colon were updated to 7.5% and 3%, respectively, to reflect values reported in the literature. The PK profile for the oral solution was simulated reasonably well ( , left panel). However, PK simulations for the tablet overpredicted the C max and underpredicted the time to the maximum concentration ( T max ) ( , middle panel). Further model refinement was therefore undertaken, considering that, due to a common ion effect, aqueous solubility of the drug (HCl salt) decreases in the presence of chloride ions. The aqueous solubility of the drug is relatively constant in the range of pH 3.5 to 5.0 and decreases at pH greater than 5.0. The in vivo pH-solubility profile was assumed to vary with formulation (solution or tablet), the volume of water administered with the tablet, and the prandial state. The in vitro and in vivo pH-solubility profiles were calculated using the Henderson–Hasselbalch equation and the estimated in vivo chloride ion concentration at the time of drug administration. Dissolution was assumed to be controlled by the diffusion of the drug through a stagnant film layer surrounding the dissolving particle as described by Pepin et al., 2019. In vitro dissolution rates were fitted to a theoretical product particle size distribution (P-PSD) and were validated by using P-PSD to predict dissolution at different pHs. The predicted dissolution profiles matched the measured profiles at pH 1.3, 2.0, and 4.5. 
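The pH-solubility calculation referenced above can be sketched as follows: a minimal Henderson–Hasselbalch illustration for a monoprotic weak base with a common-ion cap for the hydrochloride salt. All numerical values are placeholders, not the drug’s measured parameters.

```python
# Henderson-Hasselbalch total solubility for a monoprotic weak base,
# capped by a common-ion (chloride) ceiling for the HCl salt.
# s0, ksp, and cl are illustrative placeholders.

def solubility(ph, pka=9.0, s0=0.001, ksp=0.05, cl=0.1):
    """Total solubility (arbitrary units) of free base + ionized form at a given pH."""
    s_hh = s0 * (1.0 + 10.0 ** (pka - ph))  # Henderson-Hasselbalch
    s_salt_cap = ksp / cl                   # [BH+][Cl-] <= Ksp ceiling
    return min(s_hh, s_salt_cap)

for ph in (1.3, 2.0, 4.5, 6.8):
    print(ph, round(solubility(ph), 4))
```

With these placeholder values, the salt ceiling produces a flat solubility plateau at low pH (and lowers it as the chloride concentration rises), while solubility falls above pH 5 as the ionized fraction decreases, mirroring the behavior described for this drug.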
However, at pH 6.8, the P-PSD and bulk pH/solubility overpredicted the dissolution rate. Using surface pH/solubility at pH 6.8 improved the prediction but resulted in a modest underprediction compared with the measured profile. The P-PSD values were used as input to simulate in vivo dissolution for the ACAT model. Due to the pH profile in the GI tract, supersaturation of the drug can occur, leading to precipitation. A mechanistic model based on classical nucleation theory was used to account for differences in the nucleation and growth rates for the oral solution and the tablet. Further, for the tablet simulations, the pH in the ascending colon was reduced from pH 6.8 to 4.86 based on the pH value obtained from an in vitro experiment. The reduction in pH accounts for the microenvironmental pH effects of undissolved drug in the ascending colon, and the longer residence time and low chloride concentration are expected to allow further drug dissolution and absorption. The simulation for the tablet following refinement of the model indicated a good fit to the observed profile ( , right panel).

2.3.3 Model Validation and Application

Model validation employed data sets that were independent of those used in model development and included a data set for a different formulation. The validation was based on single simulation comparisons to the observed PK profiles from three clinical studies. Additional validation included comparisons of simulations to PK profiles obtained from a food effect study (low-fat and high-fat meals) and a DDI study using ketoconazole as the perpetrator. Prespecified acceptance criteria were met for most studies, except for the area under the concentration versus time curve (AUC) in one PK study (Average Fold Error (AFE): 1.35) and C max for the low-fat, low-calorie simulation (AFE: 1.27). Overall, the model validation was considered adequate for the intended use of the model to determine a dissolution safe space.
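The average fold error cited in the validation summary is typically computed as the geometric mean of predicted-to-observed ratios; a small sketch (with made-up values) follows.

```python
import math

# AFE = 10 ** mean(log10(predicted/observed)); AFE = 1 means no average
# bias, and the 1.35 reported above corresponds to ~35% average
# overprediction. Input values here are made up for illustration.

def afe(predicted, observed):
    logs = [math.log10(p / o) for p, o in zip(predicted, observed)]
    return 10.0 ** (sum(logs) / len(logs))

print(round(afe([135.0], [100.0]), 2))        # 1.35
print(round(afe([2.0, 0.5], [1.0, 1.0]), 2))  # 1.0: over- and underprediction cancel
```

Because over- and underpredictions cancel in the AFE, an absolute version (AAFE, using |log10| of the ratios) is often reported alongside it as a measure of overall precision.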
Parameter sensitivity analysis (PSA) identified CYP3A4 metabolism kinetics, small intestine transit times, small intestine and colon fluid volumes, and ascending colon pH as key parameters with an impact on C max and exposure, assessed as the AUC. Prior to model application, the ability of the population simulation to capture the observed intersubject PK variability was evaluated. Parameters identified by the PSA as influential parameters were adjusted to account for intersubject and intrasubject differences. The simulated probability contours of the plasma concentration time profile across 10 population simulation trials mimicked the range of variability observed between subjects in the clinical data set. Conservative criteria for bioequivalence were set with a requirement that all trials (10 out of 10) needed to meet the bioequivalence criteria of the 90% confidence interval of the ratio of the test to reference C max and AUC within 80–125%. The ability of VBE trials to simulate observed clinical results was evaluated by using a tablet variant that was not bioequivalent to the target profile. The bioequivalence criteria were not met for 1 of 10 virtual trials, indicating agreement in the conclusions of the virtual trials and clinical studies. To define a safe-space, theoretical dissolution profiles were generated by altering the Weibull Ph1 fraction (f1). As f1 decreases, dissolution is slower with an increase in P-PSD, and PK simulations display a correspondingly lower C max . Simulated PK for the theoretical profiles was then compared to that of the reference tablet in VBE trials. Of note, model complexity and software limitations led to unsuccessful trial simulations for some subjects (simulations did not run to completion). Of 42 virtual subjects included in the trial, only the first 32 completed subjects for the reference formulation and corresponding subject simulation for the test formulation were used for the analysis. 
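The theoretical profiles generated by varying the Weibull phase-1 fraction can be sketched with a double-Weibull release function; the time constants and shape factors below are illustrative, not the case-study values.

```python
import math

# Double-Weibull dissolution: a fraction f1 releases in a fast phase and
# the remainder (1 - f1) in a slower phase, so lowering f1 slows the
# overall profile. t1, b1, t2, b2 are illustrative placeholders.

def double_weibull(t, f1, t1=0.25, b1=1.0, t2=4.0, b2=1.0):
    """Fraction dissolved at time t (hours)."""
    fast = 1.0 - math.exp(-((t / t1) ** b1))
    slow = 1.0 - math.exp(-((t / t2) ** b2))
    return f1 * fast + (1.0 - f1) * slow

for f1 in (1.0, 0.8, 0.5):  # decreasing f1 -> slower dissolution
    profile = [round(100.0 * double_weibull(t, f1)) for t in (0.25, 0.5, 1.0)]
    print(f1, profile)
```

Each such profile can then be fed through the model and compared with the reference in a virtual trial against the 90% CI 80–125% bioequivalence window, which is how the f1-based edge of failure was located.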
For the slowest f1 profile (f1-slow), 1 of 10 virtual trials did not meet bioequivalence criteria, with a C max ratio 90% CI < 80%. All f1 profiles faster than f1-slow were bioequivalent to the reference tablet. A dissolution safe space was defined based on the results of the VBE trials and could permit widening of dissolution specifications.

2.3.4 Regulatory Perspective

This PBBM applied a mechanistic approach to in vivo drug pH-solubility profiles with consideration for common chloride ion effects and precipitation. However, the adjusted solubility profiles focused only on the most impacted GI tract regions, i.e., the stomach and colon, to limit the model complexity. As precipitation is a key consideration for this model, experimental data are recommended to support this assumption in regulatory submissions. Validation of the model based on single simulations was considered adequate, but some concerns were noted for the population simulations and VBE trials. Regulators noted that the variability of the virtual subjects in the population simulations was not fully representative of that observed in clinical trials, as probability contours covered the observed variability at a 95% prediction interval in only 5 of 10 trials. Further, virtual trial simulations were unsuccessful for some subjects due to the model complexity and software limitations. The predictive ability of the model for the nonbioequivalent tablet variant was also questioned, as 1 out of 10 trials did not meet the bioequivalence criteria. The overall assessment took into account the model risk, which was considered low per the credibility assessment framework. The defined safe-space was considered adequate to permit widening of dissolution specifications, allowing a margin of error in view of the simulation results obtained for the nonbioequivalent tablet variant.

2.4 Case Study 2: Justification of Dissolution Specification for Lesinurad.
Anders Lindahl (Swedish Medical Products Agency) and Flora Musuamba Tshinanu (Federal Agency for Medicines and Health Products, Belgium)

2.4.1 Background

The modeling work for this product was described previously in 2016, making this one of the first published PBBMs with regulatory implications. Lesinurad is a selective uric acid reabsorption inhibitor, administered orally as an IR tablet (Zurampic 200 and 400 mg) for the treatment of hyperuricemia associated with gout. Lesinurad, a weak acid with a p K a of 3.2, has low solubility at low pH values, high solubility at pH values above 5, and high intestinal permeability, i.e., BCS Class 2. During the marketing application procedure, an in silico PBBM was submitted to FDA in support of the proposed in vitro specification of Q = 80% in 30 min. The PBBM was not submitted to the European Medicines Agency (EMA) during the marketing authorization application (MAA) procedure. Of note, the in vitro dissolution specification limit, Q = 80% at 30 min, was accepted based on the in vitro dissolution of several pivotal batches and two nonbioequivalent batches. In this scenario, where the model is only descriptive and the key decision is taken based on other data, the regulatory impact of the model is considered low. However, the model assessment exercise was performed irrespective of this consideration in the context of preparation for the workshop, and several issues were identified.

2.4.2 Model Development, Validation, and Application

The modeling platform was GastroPlus (Version 9.0.0, Simulations Plus Inc., Lancaster, CA). Individual PK data were obtained from a clinical bioavailability study, including a 15 min IV infusion microtracer dose of 0.1 mg ( 14 C lesinurad) and an oral dose of 400 mg of lesinurad, in 12 subjects.
While IV data were used to estimate disposition parameters (volumes of distribution and clearances), the oral PK profiles obtained in the same subjects at the 400 mg dose were used to calculate individual gastric emptying patterns and optimize the individual P eff data. Thus, a top-down, data-driven approach was used to create individual models with subject-specific gastric emptying rates (lag time) and P eff . From the EMA perspective, a bottom-up approach would have been preferred for characterization of P eff . The default values for the percentage of the small intestine and colon volumes occupied by water (40% and 10%, respectively) were reduced to 7.5% and 2%, respectively, with reference to Schiller et al. In vitro dissolution data were fitted to a P-PSD that would match the observed in vitro dissolution per batch using the quality control method. The obtained P-PSD was then used as the input in GastroPlus. Moreover, the formulation was switched to a delayed release enteric coated tablet in the model in GastroPlus to ensure no release in the stomach. Finally, to be able to fit the model to the individual PK profiles, it was, according to the modeling report, necessary to reduce the dose for the nonbioequivalent batch in the GastroPlus platform to compensate for the lower PK exposures observed in the clinical study comparing the nonbioequivalent batch to the pivotal batch used in the model building. The dose was reduced to 352 mg in the model instead of the 400 mg that was dosed in the clinical study, and the sponsor concluded that the model could adequately predict the C max ratio between the two batches. These could be considered manual manipulations in the context of the data-driven approach, which can be questioned given the limited amount of clinical data available and the absence of convincing justification in the documentation submitted by the applicant.
From an EMA regulatory point of view, this approach would not have been acceptable for higher regulatory impact applications. PSA was performed for each subject and each batch for P eff , P-PSD, and solubility. However, PSA was missing for the formulation switch, change in GI volumes, and gastric emptying time. The intended scenario was simulated with use of a virtual population ( n = 25) based on the subjects included in the model building and a product batch with an in vitro dissolution similar to the suggested specification limit. Between-subject variability was randomly introduced (within the observed ranges) for gastric emptying and gastric pH. However, no within-subject variability was simulated as part of the sensitivity analysis. Predicted intervals from simulated trials were tighter than those observed in clinical studies. The sponsor concludes that bioequivalence is expected for a batch with product specification limit Q = 80% at 30 min, based on the PBBM. This conclusion is not shared by the EMA regulators given the identified caveats of the model. Instead, as mentioned above, the suggested in vitro dissolution specification for drug product was accepted based on the in vitro dissolution of several pivotal batches and two nonbioequivalent batches. 2.4.3 Regulatory Perspective In summary, the EMA regulators identified issues with uncertainties in P eff and gastric emptying (fitted values), fluid volumes in the GI tract, formulation switch, manually adjusting the dose during model verification, and lower variability in the simulated virtual population compared to in the clinical studies. The model would not have been accepted to justify an extended in vitro dissolution safe space beyond the Q = 80% in 30 min, if this was requested, because it would then be considered a medium to high regulatory impact. In these cases, the described issues would have been considered critical. 
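The bioequivalence criterion applied throughout these case studies, a 90% confidence interval for the geometric mean ratio of C max or AUC contained within 80-125%, can be sketched as follows. This toy version uses a normal approximation on a parallel comparison; real analyses use the t-distribution and the study's crossover structure, and the numbers are invented.

```python
# Illustrative average-bioequivalence check on Cmax or AUC. A normal
# approximation on a parallel comparison is used for brevity; real analyses
# use the t-distribution and the crossover design. Data are invented.
from math import exp, log, sqrt
from statistics import NormalDist, mean, variance

def gmr_90ci(test, ref):
    """Geometric mean ratio (test/ref) with an approximate 90% CI."""
    lt, lr = [log(x) for x in test], [log(x) for x in ref]
    diff = mean(lt) - mean(lr)
    se = sqrt(variance(lt) / len(lt) + variance(lr) / len(lr))
    z = NormalDist().inv_cdf(0.95)        # two-sided 90% interval
    return exp(diff), exp(diff - z * se), exp(diff + z * se)

def bioequivalent(test, ref):
    _, lo, hi = gmr_90ci(test, ref)
    return lo >= 0.80 and hi <= 1.25      # conventional BE limits

ref = [100, 110, 95, 105, 98, 102, 107, 93]
matched = ref[:]                          # batch matching the reference
slow = [0.6 * x for x in ref]             # hypothetical slow-dissolving batch
print(bioequivalent(matched, ref), bioequivalent(slow, ref))
```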
In order to illustrate the decision-making process from the initial question to the final answer regarding the model acceptance, the EMA regulatory assessors filled in the credibility assessment matrix for the case of lesinurad. At the EMA, filling in the credibility matrix is considered good practice in regulatory submissions that include modeling and simulation with medium- and high-regulatory-impact applications. In this case, the matrix was filled in for lesinurad for illustrative purposes only. 2.5 Case Study 3: Justification of Formulation Bioequivalence Despite Differences in Dissolution for Acalabrutinib Capsules. Rebecca Moody (FDA) 2.5.1 Background AstraZeneca submitted a PBBM case study based on publicly available data from several publications on acalabrutinib capsules. Acalabrutinib is a BCS Class II weak diprotic base drug substance formulated as a 100 mg IR capsule for the treatment of adult patients with mantle cell lymphoma who have received at least one prior therapy. The purpose of the submitted PBBM was to evaluate if differences in the in vitro dissolution between two drug product batches had an impact on the in vivo absorption, measured via PK end points. Specifically, during product development, two batches (W026394 and L0505009) had similar dissolution profiles in low pH media (pH 1) but had different dissolution profiles in pH 4.5 acetate buffer and FaSSIF media as assessed by the similarity factor (f2). It is noted that both batches were dosed in clinical trials in parallel studies with adequate outcomes. 2.5.2 Model Development, Validation, and Application In summary, the PBBM strategy involved modeling of individual subject PK data and then validating whether that population was able to reproduce the observed mean C max and AUC from several different clinical scenarios.
Individual models were constructed via top-down analysis for an 8-subject population for which microdose IV and oral administration capsule PK data were available. In building the oral absorption model, gut V max for CYP3A4 was individually fitted based on oral PK profiles, and a subject-specific gastric retention time was added to account for observed lag times. In vitro dissolution was incorporated into the model mechanistically through the P-PSD approach. In the discussion of the P-PSD approach, it was noted that an appropriate number of in vitro dissolution data points are useful for fitting (i.e., to capture the full profile), that the fewest number of bins should be used for fitting, and that the prediction ability of the fitted P-PSD needs to be validated in several pH media to be considered acceptable. Ideally, a well-structured framework is in place prior to extracting the P-PSD and identifies the dissolution media to be used for P-PSD extraction (and why), the optimization process for fitting and reducing the number of bins, and the steps for validation. For the acalabrutinib case study, however, a P-PSD was extracted for 4 different drug product batches using different dissolution conditions (i.e., pH 1 for Phase 1 capsules vs pH 6.8 for batches representative of commercial capsules) and the number of bins (i.e., 10) was not fully justified. For the PSA, several physiological and drug related parameters were varied to assess their impact on acalabrutinib exposure for one subject of the 8-subject population. The one subject was selected to be representative of the population based on their total clearance, volume of distribution, and gut CYP3A4 V max . From the PSA, it was clear that there are relevant differences in acalabrutinib exposure ( C max and AUC) due to several parameters; however, only CYP3A4 V max and gastric residence time were incorporated into the model with individual fitting for both parameters. 
Other parameters, such as P eff , were assumed to be constant across the population without sufficient justification. In addition, the model would benefit from clarity regarding the ranges of the parameters tested and whether they are representative of the ranges expected in the greater population. Addressing the uncertainty regarding input parameters and the potential clinical relevance of those uncertainties to assess the model consequence and reliability would be useful. The model was validated by evaluating the accuracy of the 8-subject population in simulating acalabrutinib exposure from 16 different clinical scenarios. The model predicted that the C max and AUC ratios between the test (W026394) and reference (L0505009) batches were close to 1.0 and that the 90% confidence intervals fell within the bioequivalence (BE) limits of 0.8–1.25. 2.5.3 Regulatory Perspective Overall, considering the totality of evidence, the risk of bioinequivalence for drug product batches W026394 and L0505009 due to dissimilar dissolution at high pH (i.e., pH 4.5 and above) was low. However, the application of the PBBM for future use is considered limited due to uncertainties. Specifically, questions remain concerning the use of an 8-subject data set as representative of the wider population (without being able to capture within-subject variability) and the selection of fitted parameters without appropriate justification. To support future application of the PBBM, additional data from clinical studies involving DDI could support assumptions regarding CYP3A4 V max . It could also be beneficial to incorporate power and sample size calculations based on the observed variabilities from population studies so that the model would have greater utility and wider generalizability. As a future discussion point for the modeling community, there were concerns and unknown consequences from health authorities on the topic of model multiplicity.
There were at least three acalabrutinib PBBMs highlighted in this case study: (1) the GastroPlus model submitted for regulatory approval to the U.S. FDA, (2) the GastroPlus model described in peer-reviewed publications, and (3) the model developed in Simcyp. This adds an additional layer of complexity, as slight differences were noted between each model. Where is the boundary for “fit for purpose”? In an ideal world, would there be one model for one drug product, one model that would be used throughout the drug product’s entire lifecycle for all purposes (e.g., DDIs, postapproval changes, biowaivers, etc.)? 2.6 Case Study 9: A Retrospective Case Study on Fluconazole. Øyvind Holte (Norwegian Medical Products Agency) 2.6.1 Background The data included in this case study were selected from the wide body of data that exists for fluconazole: different strengths of tablets and capsules, an oral solution, and also an intravenous formulation. The results of several clinical PK studies, performed between 1983 and 2019, were available for development and verification of the model. The company investigated whether PBBM could demonstrate bioequivalence between the various drug products despite significantly differing dissolution profiles and whether a validated PBBM approach could provide the ability to establish a dissolution safe space for bioequivalence. 2.6.2 Model Development, Validation, and Application IV formulation data were used to confirm the clearance and volume of distribution for fluconazole, readily available from the literature. Second, GI absorption of fluconazole was estimated based on the exposure following dosing of oral solutions (two concentrations). Finally, oral solid dose formulations (tablets and hard capsules) were included in the model, supported by the in vitro dissolution performance (Weibull parametrization, or the Johnson model based on the particle size distribution of fluconazole). A total of 17 simulations were performed to develop the model.
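The Weibull parametrization mentioned as a dissolution input takes the form F(t) = Fmax·(1 − exp(−(t/td)^β)). A minimal sketch of fitting it is shown below, with synthetic "observed" points and a crude grid search standing in for a proper optimizer.

```python
# Weibull parametrization of an in vitro dissolution profile, one of the
# empirical model inputs mentioned for fluconazole. The "observed" points
# are synthetic, and the grid-search fit is only a sketch.
from math import exp

def weibull(t, td, beta, fmax=100.0):
    return fmax * (1.0 - exp(-((t / td) ** beta)))

def fit_weibull(times, obs):
    """Crude grid search over the time-scale (td) and shape (beta) parameters."""
    best = None
    for td in [5 + 0.5 * i for i in range(60)]:           # 5 .. 34.5 min
        for beta in [0.5 + 0.05 * j for j in range(40)]:  # 0.5 .. 2.45
            sse = sum((weibull(t, td, beta) - y) ** 2 for t, y in zip(times, obs))
            if best is None or sse < best[0]:
                best = (sse, td, beta)
    return best[1], best[2]

times = [5, 10, 15, 20, 30, 45]
obs = [weibull(t, 12.0, 1.2) for t in times]              # synthetic "data"
td, beta = fit_weibull(times, obs)
print(round(td, 1), round(beta, 2))                       # recovers ~12.0, ~1.2
```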
Separate data sets were used for model development and model validation. The model concluded that there is no significant food effect for the oral hard capsules. Likewise, fluconazole PK is not affected by the concomitant intake of antacid. The model was further used to predict the bioequivalence ( C max and AUC) of a series of oral solid formulations exhibiting a range of in vitro dissolution rates. Compared to a commercial formulation, some of these formulations had dissolution profiles that were clearly not “similar” based on the f2 algorithm. In other words, these dissolution data would typically not be accepted to support a BCS-based biowaiver. The model predicted that some of these formulations were bioequivalent, regardless of an f2 < 50. The formulations with the slowest dissolution rate were predicted by the model to be nonbioequivalent. These results were used to justify a possible widening of the acceptance criteria for the dissolution test. VBE trials were ultimately performed to replicate the results of the previously conducted PK studies. Furthermore, VBE trials were used to establish the appropriate dissolution criteria, based on virtual batches having dissolution profiles between an unacceptable (slow) batch and the slowest among the acceptable clinical batches. Based on the VBE trials, a suitable acceptance criterion is NLT 80% dissolved in 75 min. This is substantially wider than the current acceptance criterion at 30 min, which is normal for an immediate-release drug product. 2.6.3 Regulatory Perspective It is acknowledged that for the purpose of this case study all relevant details were not available. The clinical trials were conducted without any intent of supporting PK modeling, and certain drug product details relevant to modeling are not available. Based on the data presented for this case study, the regulators had many questions regarding the conclusions made by the company. 
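The f2 similarity factor used in these dissolution comparisons, with f2 ≥ 50 conventionally read as "similar", is straightforward to compute from percent-dissolved values at matched time points. The profiles below are hypothetical.

```python
# f2 similarity factor for two dissolution profiles (percent dissolved at
# matched time points); f2 >= 50 is the conventional similarity threshold.
# The example profiles are hypothetical.
from math import log10, sqrt

def f2(ref, test):
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n  # mean squared difference
    return 50.0 * log10(100.0 / sqrt(1.0 + msd))

ref  = [35, 55, 72, 85, 93, 97]
near = [33, 52, 70, 84, 92, 96]   # small differences -> similar
far  = [15, 30, 48, 65, 80, 90]   # slower batch -> not similar
print(round(f2(ref, near), 1))    # comfortably above 50
print(round(f2(ref, far), 1))     # below 50
```

Note that regulatory use of f2 carries additional conditions (e.g., limits on the number of points above 85% dissolved and on variability) not captured in this sketch.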
There are uncertainties regarding the model’s ability to predict the PK of fluconazole. VBE trials were performed, based on the model, to recapitulate the observed results from the available BE studies. However, certain assumptions made by the company were in question, and the conclusions made based on the VBE were not the same conclusions as found by the various regulatory authorities. In conclusion, based on the data provided with the case study, the PBBM is of limited value and would probably not be considered sufficient as a substitute for clinical data in a regulatory setting. The company’s conclusions, which were supported by the model predictions, would normally require a bioequivalence study (in the absence of modeling). From a patient safety perspective, future batches of a drug product should not differ significantly from the batches used in a pivotal clinical trial. Therefore, wide dissolution rate acceptance criteria are normally not acceptable. A large batch-to-batch variation could indicate nonbioequivalence. It is acknowledged that, for certain drug products, the in vitro dissolution rate may not be directly related to clinical efficacy and safety, and relatively large differences can be acceptable. PBBM is well suited to support such decisions. The model development presented with this case study is based on a substantial amount of clinical data–more than what can be expected for a new drug product under development. Still, the data have certain deficiencies. As indicated above, the clinical trials were not planned and conducted with the development of a PBBM in mind. For example, detailed information regarding the PSD was not available for all of the batches, and this model input parameter was therefore assumed or estimated. Also, the conditions used for dissolution testing were not the same for all of the drug products: a higher paddle rotation speed can lead to a faster dissolution rate.
This makes the head-to-head comparison of the various dissolution results and their use as model input difficult. For a bottom-up modeling approach, such uncertainties reduce the credibility of the model predictions. Apparently, no sensitivity analysis was performed during the model development. Several of the simulations overestimated the C max and/or the AUC, and no efforts were made to adjust or correct the initial model based on these observations. Although the model predicted no significant effect of concomitant antacid or food intake, the confidence in such results is reduced by the underlying uncertainty of each model estimation. In conclusion, it is believed that the presented PBBM would not be accepted as a substitute for BE trials to support a marketing authorization. However, the concerns indicated above would possibly be resolved during an application procedure. 2.7 Panel Discussion The panel discussion brought together the following regulators from multiple health authorities: Rebecca Moody (FDA), Luiza Borges (ANVISA), Maria Malamatari (MHRA), Øyvind Holte (Norwegian Medical Products Agency), Shereeni Veerasingham (Health Canada), and Shinichi Kijima (PMDA). The moderators were Paul Seo (FDA) and Sumit Arora (Janssen). The panel members were asked a series of questions regarding model parametrization. 2.7.1 Q1: What Is Your Opinion on the Use of Fitted Parameters versus Generated Data. In Particular, What Level of Fitting/Extrapolation Would Be Acceptable? Øyvind Holte (Norwegian MPA) pointed out that, if model input parameters are fitted, it would be useful for them to be constant during model verification and validation where relevant. The model verification would in fact highlight whether the assumptions made or the model parameters that were fitted are correct (or not). 
For example, when dissolution data are introduced in a PBBM with a mechanistic model such as the Z-factor or P-PSD, the adequacy of the Z-factor or P-PSD should be verified in vitro by checking whether the dissolution of the same batch obtained using different methodologies can be adequately predicted. This step should be made on several drug product batches of the same formulation and process to verify the dissolution model adequacy prior to its introduction in the PBBM. The panelists expressed the need for more data to demonstrate how the P-PSD works. Xavier Pepin (Simulations Plus, Inc.) responded that the P-PSD represents the surface of drug substance available in the drug product for dissolution and that a measurement of this surface area with an orthogonal technique could be difficult. Ultimately, the P-PSD validation in vitro and in vivo in different conditions of the GI tract demonstrates its usability, as was suggested by the panelists. 2.7.2 Q2: How Important Is the Model Contribution to the Regulatory Decision for Quality Aspects of Drug Development, Submission, and Postapproval Changes? Kuemmel et al. have developed a credibility assessment framework applicable to model-informed drug development which defines a model influence, i.e., whether there exist additional data to support the question that the model tries to answer, and the decision consequence, i.e., the potential consequences to the patients if the decision supported by the model were wrong. Both model influence and decision consequence can be used to assess the risk of the PBBM. Shereeni Veerasingham (HC) stated that there is no current guideline in Canada regarding the development, validation, and use of PBBM. A case-by-case approach is employed, and the totality of the data submitted to support the file application is used to guide the decision. Luiza Borges (ANVISA) pointed out that, for ANVISA, the PBBM is evaluated in terms of proposed application, development, and validation.
The identification of the most influential model parameters is key. The data sets used for model validation are also examined for relevance. Uncertain parameters that are fitted would be expected to be highlighted. Finally, the totality of the relevant data provided for the model application is then considered for the evaluation. Shinichi Kijima (PMDA) indicated that a few submissions to PMDA were reviewed using a quality decision-making process, and PMDA’s cross-functional team was involved in those reviews. Rebecca Moody (FDA) stated that FDA typically reviews submissions of PBBMs with an interdisciplinary approach. The aim of the review is to understand the risks to the patient and what the model indicates in terms of product quality variations. Like other agencies, the totality of the data is considered to support the decision. Øyvind Holte (Norwegian MPA) indicated that the number of PBBM cases reviewed by EMA is currently fewer than 5 and that EMA is therefore relatively new to this type of submission. It was also recommended to contact EMA in advance, if the intent of the PBBM is to waive a clinical evaluation, to set respective expectations, to agree on a process, and to organize the right review team. 2.7.3 Q3: What Is the Level of Parameter Justification Expected for a PBBM? Panelists indicated that, whether parameters originate from experiments or are fitted to other sources of data, it is useful for the measurement methods to be standard and well described. Fitting parameters within an acceptable range is not prohibited; however, justification with adequate scientific references would be helpful. 2.7.4 Q4: Are Virtual Bioequivalence Studies Acceptable? Panelists indicated that they see that the number of VBE studies in PBBM submissions is growing. Since this is a clear direction that industry is taking, the panelists suggested that the populations included in VBE studies should be wider.
In addition, within-subject variability should be included, ideally using mechanistic models, and compared to that observed in the clinic as much as possible. The virtual studies would be expected to reproduce the observed variability. 2.7.5 Q5: Are There Any Other Expectations in Terms of the Content and Format for Submitted PBBMs? Panelists mentioned that visualization of the whole modeling strategy is very important, in addition to the assumptions made and their verification. Panelists also expressed the desire to see the model development history, i.e., why certain changes were made from default values, their magnitude, and how they impacted the model outcome. The industry participants believe that a report template could be useful for both regulators and industry to set expectations for future submissions. It would be important to include some details in each section to describe data expectations, with some examples. A template will be proposed by industry experts as a separate article. Introduction to the Workshop. Bhagwant Rege (FDA) FDA’s Office of Pharmaceutical Quality believes that everyone deserves to have confidence in their next dose of medicine and that pharmaceutical quality ensures the availability, safety, and efficacy of every dose. Biopharmaceutics is the link between DP quality and clinical performance in the patient. Patient-centric quality standards (PCQSs) ensure that the DP consistently delivers clinical performance to the patient as described on the label in terms of safety and efficacy over its shelf life and from batch to batch. PCQSs can provide additional flexibility to pharmaceutical manufacturers while maintaining quality by establishing acceptance criteria based on clinical performance rather than process capability or manufacturing process control. PCQSs also avoid under- or overdiscriminating specifications, which are not in the patient’s interests.
The main obstacle to establishing PCQSs is a weak or often missing link between the in vitro and in vivo performance of the DPs. PBBM can help to overcome this obstacle. PBBM is a subset of physiologically based pharmacokinetic (PBPK) models that are specific for biopharmaceutics applications. PBBM has more than 10 years of regulatory history. PBBM is mechanistic by nature because it integrates the physicochemical properties of the drug substance (DS) and DP, the formulation composition, the route of administration, and the gastrointestinal (GI) physiology to predict in vivo exposures. PBBM can provide the crucial link between in vitro and in vivo performance of drug products to establish PCQSs, which include the dissolution method and acceptance criteria, dissolution safe space, and specifications for critical bioavailability attributes such as particle size distribution, polymorphism or crystalline content, granule properties, and manufacturing process parameters. PBBM can also provide supportive evidence for biowaivers, including biopharmaceutics classification system (BCS) based biowaivers and additional strength waivers, as well as scientific bridging for 505(b)(2) products. FDA cosponsored two workshops on PBBM, in 2017 and 2019. FDA also published the draft guidance on the use of PBPK analyses for biopharmaceutics applications in 2020. Currently, global regulatory acceptance of PBBM faces some challenges. These include the lack of a prospective PBBM strategy, leading to inadequate model input and validation, and biologically implausible optimizations to fit model predictions to clinical data.
A primary objective of this workshop was to discuss best practices on PBBM with respect to model input (in vitro and in vivo), model validation, and model applications; discuss new areas of PBBM applications such as generics and modified release (MR) products; and finally explore the areas of agreement between the industry and regulators for future harmonization efforts. Keynote Speech: PBBM: Impact and Future Perspective. Jennifer Dressman Prof. Jennifer Dressman kicked off the conference with a plenary lecture on the current status of PBBM for various routes of administration. She highlighted that the physiology at the given site of administration should be adequately captured and that release tests must be tailored to the specific site of application, as well as the dosage form applied. Modeling is then required to bring both of these aspects together and translate the results into a prediction of plasma and/or local concentration profiles. For modeling systemic levels, it is highly recommended to start with the disposition kinetics and compare the model against clinical intravenous (IV) data whenever possible. Probably the most advanced PBBMs are those for oral drug delivery. Much data exists for the physiology of the GI tract, and quite sophisticated models are already available in the most frequently used software tools. One area in which we could do better is the modeling of GI motility, particularly in the fed state, which may have a large impact on the gastric distribution of the drug and consequently its gastric emptying. In the past few years, there has been a concerted effort across academic institutions to create biopharmaceutical tests which better mimic release from the formulation in the GI tract. As a result, biorelevant media have largely replaced United States Pharmacopeia (USP) standard buffers as test media in pharmaceutical development.
However, the most widely used equipment is still the USP Type 2 (Paddle) apparatus, and it remains to be seen whether other apparatuses can attain the same broad level of acceptance. Likewise, while assessing permeability by running bioavailability studies in animals has been largely replaced by studies in cell lines such as Caco-2 and Madin-Darby canine kidney (MDCK) cells, we still need better models for human permeability. To build a “digital twin”-based population pharmacokinetic (PK) model, the variability in physiology and its ramifications in terms of inter- and intraindividual variability in release rate and permeability must be taken into account. Efforts to mechanistically model both release from different types of dosage forms and drug permeability are already underway and have achieved some success. Using ibuprofen as a test compound, creation of a robust in silico model to describe its dissolution under various conditions was demonstrated. Further, case examples showcased the joint impact of formulation and food on itraconazole PK and the joint impact of formulation and proton pump inhibitor (PPI) on the PK of an AstraZeneca development compound. Similar approaches have been used to build PBBMs for other routes of administration. For the dermal route, many different formulation types are available, and the choice of formulation will have a strong impact on the depth of permeation into (and beyond) the skin. The challenge lies in tailoring the release studies to the intended site of drug delivery. Like the GI tract, skin physiology is quite well understood, and the next tasks will be to capture changes in skin physiology with body location, patient age, ethnicity, and disease state. Nevertheless, PBBM has already progressed to the point where virtual bioequivalence (VBE) assessments of topical formulations are starting to gain acceptance at the regulatory level.
For long-acting injectables, PBBMs are used to describe simultaneous release and biodegradation of polymeric vehicles, and there are also some recent advances in biopharmaceutics evaluation, e.g., the Dispersion Releaser. In the case of products that are inhaled, biopharmaceutical models include considerations of particle size and shape, with measurements of tissue permeability in the lung frequently being conducted in Calu-3 cells. In summary, PBBM has really picked up the pace in the past few years, and by 2030, it is likely that we will have reliable PBBMs across a range of routes of administration. The advantages of PBBM are self-evident–with the physiological “digital twin” approach, we should be able to predict first-in-human levels better, as well as reduce the number and/or size of studies necessary to identify drug–drug interactions (DDI) and food effect interactions. The impact of PBBM will be more biowaivers based on VBE, application to “beyond the rule of five” drugs, and the reduction or even elimination of animal studies in formulation development, which will culminate in more effective medicines becoming available to patients sooner. Case Study 1: A PBBM Based Dissolution Safe-Space for a BCS Class II Drug Substance. Shereeni Veerasingham and Arthur Okumu (Health Canada) 2.3.1 Background PBBM was utilized to establish a dissolution safe space for an immediate release (IR) tablet from Amgen containing a BCS Class II drug substance. The drug is a weak base, hydrochloride salt with a p K a of approximately 9. Following oral administration of the tablet, the maximum plasma concentration ( C max ) is achieved in approximately 6 h. Administration of the tablet with food increases the rate and extent of drug absorption, with a greater impact observed with a high-fat meal compared to a low-fat meal.
The clinical knowledge space includes tablet variants that were evaluated in clinical bioequivalence studies, including a tablet variant that was found to be nonbioequivalent to the target profile. The nonbioequivalent tablet variant had a significantly slower in vitro dissolution profile than the target profile. PBBM based VBE trials were conducted to determine the in vitro dissolution edge of failure for bioequivalence and establish a dissolution safe space for the tablet. The question of interest was, can the dissolution specification for the oral tablet be widened and still ensure bioequivalent in vivo performance? 2.3.2 Model Development The PBBM used the Advanced Compartmental Absorption and Transit (ACAT) model in GastroPlus (ver. 9.8.3, Simulations Plus Inc., Lancaster, CA). Changes were made to the default ACAT model based on literature research, in vitro data, and clinical observations to optimize simulations for the tablet. The disposition model was developed based on the physicochemical and biopharmaceutical properties and intravenous (IV) and oral PK data from 5 clinical studies. Initial Michaelis–Menten constant ( K m ) and maximum reaction velocity ( V max ) values for CYP3A4 and CYP1A2 were estimated by ADMET Predictor (Simulations Plus Inc., Lancaster, CA). Clearance was determined by optimizing the K m and V max values to fit the observed clinical plasma concentrations following IV infusion of the drug at three different doses. During oral absorption model development, the effective permeability ( P eff ) was fitted to PK data for the oral solution obtained under fasting conditions and verified by comparison of the simulation for fed conditions to observed PK data. In addition, the percentages of fluid in the small intestine and colon were updated to 7.5% and 3%, respectively, to reflect values reported in the literature. The PK profile for the oral solution was simulated reasonably well (left panel).
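The disposition structure described above, saturable Michaelis-Menten elimination fitted to IV infusion data at several doses, can be sketched with a one-compartment model. The Euler integrator and all parameter values below are illustrative placeholders, not those of the actual model.

```python
# One-compartment disposition with saturable (Michaelis-Menten) elimination,
# the structure used when optimizing Km and Vmax against IV infusion data.
# The Euler integrator and parameter values are illustrative placeholders.
def simulate_iv(dose_mg, vmax, km, vd_l, tinf_h=0.25, t_end_h=24.0, dt=0.005):
    """Return (Cmax in mg/L, AUC in mg*h/L) for a constant-rate infusion."""
    c = cmax = auc = 0.0
    t = 0.0
    rate = dose_mg / tinf_h                       # infusion rate, mg/h
    while t < t_end_h:
        inp = rate / vd_l if t < tinf_h else 0.0  # input, mg/L/h
        elim = vmax * c / (km + c) / vd_l         # saturable elimination, mg/L/h
        c = max(c + (inp - elim) * dt, 0.0)
        t += dt
        auc += c * dt
        cmax = max(cmax, c)
    return cmax, auc

# Hypothetical: Vmax 50 mg/h, Km 2 mg/L, Vd 50 L. The saturable pathway
# shows up as supra-proportional AUC when the dose is tripled.
low = simulate_iv(100.0, 50.0, 2.0, 50.0)
high = simulate_iv(300.0, 50.0, 2.0, 50.0)
print(f"AUC ratio for 3x dose: {high[1] / low[1]:.1f}")
```

Fitting K m and V max at several dose levels, as done for this model, is what pins down such nonlinearity.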
However, PK simulations for the tablet overpredicted the C max and underpredicted the time to the maximum concentration ( T max ) ( , middle panel). Further model refinement was therefore undertaken, considering that, due to a common ion effect, aqueous solubility of the drug (HCl salt) decreases in the presence of chloride ions. The aqueous solubility of the drug is relatively constant in the range of pH 3.5 to 5.0 and decreases at pH greater than 5.0. The in vivo pH-solubility profile was assumed to vary with formulation (solution or tablet), the volume of water administered with the tablet, and the prandial state. The in vitro and in vivo pH-solubility profiles were calculated using the Henderson–Hasselbalch equation and the estimated in vivo chloride ion concentration at the time of drug administration. Dissolution was assumed to be controlled by the diffusion of the drug through a stagnant film layer surrounding the dissolving particle as described by Pepin et al., 2019. In vitro dissolution rates were fitted to a theoretical product particle size distribution (P-PSD) and were validated by using P-PSD to predict dissolution at different pHs. The predicted dissolution profiles matched the measured profiles at pH 1.3, 2.0, and 4.5. However, at pH 6.8, the P-PSD and bulk pH/solubility overpredicted the dissolution rate. Using surface pH/solubility at pH 6.8 improved the prediction but resulted in a modest underprediction compared with the measured profile. The P-PSD values were used as input to simulate in vivo dissolution for the ACAT model. Due to the pH profile in the GI tract, supersaturation of the drug can occur, leading to precipitation. A mechanistic model based on classical nucleation theory was used to account for differences in the nucleation and growth rates for the oral solution and the tablet. Further, for the tablet simulations, the pH in the ascending colon was reduced from pH 6.8 to 4.86 based on the pH value obtained from an in vitro experiment. 
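The pH-solubility treatment described above can be sketched with the Henderson–Hasselbalch relation for a weak base (p K a 9) dosed as its hydrochloride salt, with a chloride common-ion ceiling on the cation. The intrinsic solubility, K sp, and chloride concentrations below are invented placeholders chosen so that the profile is flat around pH 3.5 to 5 and falls above pH 5, mirroring the behavior described in the text.

```python
def weak_base_solubility(pH, pKa=9.0, s0=1e-6, ksp=1e-3, cl_molar=0.1):
    """Total solubility (mol/L) of a weak base dosed as its HCl salt.

    Below pHmax the salt controls solubility and the common-ion effect
    with chloride caps the cation concentration [BH+]; above pHmax the
    free base controls it (Henderson-Hasselbalch). Illustrative constants.
    """
    ratio = 10 ** (pKa - pH)                 # [BH+]/[B]
    base_controlled = s0 * (1 + ratio)       # free base solid in equilibrium
    bh_max = ksp / cl_molar                  # chloride common-ion ceiling
    salt_controlled = bh_max * (1 + 1 / ratio)
    return min(base_controlled, salt_controlled)

for pH in (1.3, 3.5, 5.0, 6.8):
    print(pH, weak_base_solubility(pH))

# Higher chloride (e.g., gastric fluid) depresses low-pH solubility:
print(weak_base_solubility(1.3, cl_molar=0.2))
```

Raising the chloride concentration lowers the low-pH plateau, which is the common-ion effect invoked in the model refinement.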
The reduction in pH accounts for the microenvironmental pH effect of undissolved drug in the ascending colon, where the longer residence time and low chloride concentration are expected to allow further drug dissolution and absorption. Simulation for the tablet following model refinement showed a good fit to the observed profile ( , right panel).

2.3.3 Model Validation and Application

Model validation employed data sets that were independent from those used in model development and included a data set for a different formulation. The validation was based on single-simulation comparisons to the observed PK profiles from three clinical studies. Additional validation included comparisons of simulations to PK profiles obtained from a food effect study (low-fat and high-fat meals) and a DDI study using ketoconazole as the perpetrator. Prespecified acceptance criteria were met for most studies, except for the area under the concentration versus time curve (AUC) in one PK study (Average Fold Error (AFE): 1.35) and C max for the low-fat, low-calorie simulation (AFE: 1.27). Overall, the model validation was considered adequate for the intended use of the model to determine a dissolution safe-space. Parameter sensitivity analysis (PSA) identified CYP3A4 metabolism kinetics, small intestine transit times, small intestine and colon fluid volumes, and ascending colon pH as key parameters with an impact on C max and exposure, assessed as the AUC. Prior to model application, the ability of the population simulation to capture the observed intersubject PK variability was evaluated. Parameters identified by the PSA as influential were adjusted to account for intersubject and intrasubject differences. The simulated probability contours of the plasma concentration–time profile across 10 population simulation trials mimicked the range of variability observed between subjects in the clinical data set.
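The fold-error metrics behind acceptance criteria like those above can be computed as below. AFE is the geometric mean of predicted/observed ratios; the absolute variant (AAFE) penalizes over- and under-prediction alike. The numbers are illustrative, and acceptance thresholds are study-specific.

```python
import math

def afe(predicted, observed):
    """Average fold error: geometric mean of predicted/observed ratios.
    Values near 1 indicate little systematic bias."""
    logs = [math.log10(p / o) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

def aafe(predicted, observed):
    """Absolute average fold error: penalizes over- and under-prediction alike."""
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

# Illustrative predicted vs observed Cmax values (ng/mL) from three studies.
pred = [105.0, 88.0, 132.0]
obs = [100.0, 95.0, 110.0]
print(round(afe(pred, obs), 3), round(aafe(pred, obs), 3))
```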
Conservative criteria for bioequivalence were set, with the requirement that all trials (10 out of 10) meet the bioequivalence criteria of the 90% confidence interval of the test-to-reference C max and AUC ratios falling within 80–125%. The ability of the VBE trials to reproduce observed clinical results was evaluated by using a tablet variant that was not bioequivalent to the target profile. The bioequivalence criteria were not met for 1 of 10 virtual trials, indicating agreement between the conclusions of the virtual trials and the clinical studies. To define a safe-space, theoretical dissolution profiles were generated by altering the Weibull Ph1 fraction (f1). As f1 decreases, dissolution is slower with an increase in P-PSD, and PK simulations display a correspondingly lower C max . Simulated PK for the theoretical profiles was then compared to that of the reference tablet in VBE trials. Of note, model complexity and software limitations led to unsuccessful trial simulations for some subjects (simulations did not run to completion). Of 42 virtual subjects included in the trial, only the first 32 completed subjects for the reference formulation and the corresponding subject simulations for the test formulation were used for the analysis. For the slowest f1 profile (f1-slow), 1 of 10 virtual trials did not meet the bioequivalence criteria, with a C max ratio 90% CI < 80%. All f1 profiles faster than f1-slow were bioequivalent to the reference tablet. A dissolution safe-space was defined based on the results of the VBE trials and could permit widening of the dissolution specifications.

2.3.4 Regulatory Perspective

This PBBM applied a mechanistic approach to in vivo drug pH-solubility profiles with consideration for the common chloride ion effect and precipitation. However, the adjusted solubility profiles focused only on the most impacted GI tract regions, i.e., the stomach and colon, to limit the model complexity.
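The per-trial bioequivalence criterion used above (90% CI of the geometric mean test/reference ratio within 80–125%) can be sketched for paired per-subject exposures on the log scale. The subject data and the tabulated t value are illustrative, not the study's data.

```python
import math
import statistics

def be_90ci(test, ref, t_crit):
    """90% CI of the geometric mean test/reference exposure ratio
    (Cmax or AUC) from paired per-subject values, on the log scale."""
    diffs = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
    mean = statistics.fmean(diffs)
    half = t_crit * statistics.stdev(diffs) / math.sqrt(len(diffs))
    return math.exp(mean - half), math.exp(mean + half)

# Illustrative Cmax values (ng/mL) for 10 virtual subjects.
ref = [100, 120, 95, 110, 105, 130, 90, 115, 125, 100]
test = [98, 118, 99, 104, 108, 126, 88, 112, 121, 103]
lo, hi = be_90ci(test, ref, t_crit=1.833)  # tabulated t(0.95, df=9)
bioequivalent = 0.80 <= lo and hi <= 1.25
print(round(lo, 3), round(hi, 3), bioequivalent)
```

In the VBE exercise, this check is repeated per virtual trial, and the conservative rule demands that all 10 trials pass.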
As precipitation is a key consideration for this model, experimental data are recommended to support the assumption for regulatory submissions. Validation of the model based on single simulations was considered adequate, but some concerns were noted for the population simulations and VBE trials. Regulators noted that the variability of the virtual subjects for the population simulations was not fully representative of that observed in clinical trials, as probability contours covered the observed variability at a 95% prediction interval in only 5 of 10 trials. Further, virtual trial simulations were unsuccessful for some subjects due to the model complexity and software limitations. The predictive ability of the model for the nonbioequivalent tablet variant was also questioned, as 1 out of 10 trials did not meet the bioequivalence criteria. The overall assessment takes into account the model risk, which was considered low per the credibility assessment framework. The defined safe-space was considered adequate to permit widening of the dissolution specifications, allowing a margin of error in view of the simulation results obtained for the nonbioequivalent tablet variant.

Case Study 2: Justification of Dissolution Specification for Lesinurad. Anders Lindahl (Swedish Medical Products Agency) and Flora Musuamba Tshinanu (Federal Agency for Medicines and Health Products, Belgium)

2.4.1 Background

The modeling work for this product has been described previously in 2016, making it one of the first published PBBMs with regulatory implications. Lesinurad is a selective uric acid reabsorption inhibitor, administered orally as an IR tablet (Zurampic 200 and 400 mg) for treatment of hyperuricemia associated with gout. Lesinurad, a weak acid with a p K a of 3.2, has low solubility at low pH values, high solubility at pH values above pH 5, and high intestinal permeability, i.e., BCS Class 2.
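Lesinurad's pH-solubility behavior follows the weak-acid form of the Henderson–Hasselbalch relation, the mirror image of the weak base in Case Study 1. Only the p K a of 3.2 comes from the text; the intrinsic solubility below is an invented placeholder.

```python
def weak_acid_solubility(pH, pKa=3.2, s0=1e-5):
    """Henderson-Hasselbalch total solubility (mol/L) of a weak acid;
    s0 is an illustrative intrinsic solubility, not lesinurad's value."""
    return s0 * (1 + 10 ** (pH - pKa))

for pH in (1.2, 3.2, 5.0, 6.8):
    print(pH, weak_acid_solubility(pH))
```

Solubility stays near the intrinsic value in the stomach and rises roughly 10-fold per pH unit above the p K a, consistent with the low solubility at low pH and high solubility above pH 5 noted above.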
During the marketing application procedure, an in silico PBBM was submitted to the FDA in support of the proposed in vitro specification of Q = 80% in 30 min. The PBBM was not submitted to the European Medicines Agency (EMA) during the marketing authorization application (MAA) procedure. Of note, the in vitro dissolution specification limit, Q = 80% at 30 min, was accepted based on the in vitro dissolution of several pivotal batches and two nonbioequivalent batches. In this scenario, where the model is only descriptive and the key decision is taken based on other data, the regulatory impact of the model is considered low. However, the model assessment exercise was performed irrespective of this consideration in the context of the preparation for the workshop, and several issues were identified.

2.4.2 Model Development, Validation, and Application

The modeling platform was GastroPlus (Version 9.0.0, Simulations Plus Inc., Lancaster, CA). Individual PK data were obtained from a clinical bioavailability study, including a 15 min IV infusion microtracer dose of 0.1 mg ( 14 C lesinurad) and an oral dose of 400 mg of lesinurad, in 12 subjects. While the IV data were used to estimate disposition parameters (volumes of distribution and clearances), the oral PK profiles obtained in the same subjects at the 400 mg dose were used to calculate individual gastric emptying patterns and optimize the individual P eff values. Thus, a top-down, data-driven approach was used to create individual models with subject-specific gastric emptying rates (lag time) and P eff . From the EMA perspective, a bottom-up approach would have been preferred for characterization of P eff . The default values for the percentage of volume occupied by water in the small intestine and colon (40% and 10%, respectively) were reduced to 7.5% and 2%, respectively, with reference to Schiller et al.
In vitro dissolution data were fitted to a P-PSD that would match the observed in vitro dissolution per batch using the quality control method. The obtained P-PSD was then used as the input in GastroPlus. Moreover, the formulation was switched to a delayed release enteric coated tablet in the GastroPlus model to ensure no release in the stomach. Finally, to fit the model to the individual PK profiles, it was, according to the modeling report, necessary to reduce the dose for the nonbioequivalent batch in the GastroPlus platform to compensate for the lower PK exposures observed in the clinical study comparing the nonbioequivalent batch to the pivotal batch used in the model building. The dose was reduced to 352 mg in the model instead of the 400 mg that was dosed in the clinical study, and the sponsor concluded that the model could adequately predict the C max ratio between the two batches. These could be considered manual manipulations that can be questioned given the limited amount of clinical data available and the absence of convincing justification in the documentation submitted by the applicant. From an EMA regulatory point of view, this approach would not have been acceptable for higher regulatory impact applications. PSA was performed for each subject and each batch for P eff , P-PSD, and solubility. However, PSA was missing for the formulation switch, the change in GI volumes, and the gastric emptying time. The intended scenario was simulated with a virtual population ( n = 25) based on the subjects included in the model building and a product batch with an in vitro dissolution similar to the suggested specification limit. Between-subject variability was randomly introduced (within the observed ranges) for gastric emptying and gastric pH. However, no within-subject variability was simulated as part of the sensitivity analysis.
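The stagnant-film dissolution picture behind the P-PSD fitting described above can be sketched as a shrinking-sphere model for a single monodisperse particle bin, with the film thickness taken as the particle radius capped at 30 µm. Every numeric value below is illustrative; a fitted P-PSD is essentially a small set of such bins whose radii and mass fractions are adjusted until the summed profile reproduces the measured in vitro dissolution.

```python
def fraction_dissolved(r0_um, t_end_s, cs=1e-5, dose=0.001, volume=500.0,
                       d_coef=5e-6, rho=1.2, dt=0.1):
    """Shrinking-sphere dissolution of one monodisperse particle bin under
    a stagnant-film (diffusion-layer) assumption. Illustrative parameters.
    cs: solubility (g/mL); dose (g); volume (mL);
    d_coef: diffusion coefficient (cm^2/s); rho: particle density (g/mL).
    """
    r0 = r0_um * 1e-4                   # radius in cm
    r = r0
    dissolved = 0.0
    t = 0.0
    while t < t_end_s and r > 0:
        cb = dissolved / volume          # bulk concentration (g/mL)
        h = min(r, 30e-4)                # film thickness ~ radius, capped at 30 um
        drdt = -d_coef * max(cs - cb, 0.0) / (rho * h)
        r = max(r + drdt * dt, 0.0)
        dissolved = dose * (1 - (r / r0) ** 3)
        t += dt
    return dissolved / dose

# Smaller particles in the fitted P-PSD dissolve faster:
for r0 in (5, 20, 50):                   # radii in micrometres
    print(r0, round(100 * fraction_dissolved(r0, t_end_s=600), 1))
```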
Predicted intervals from simulated trials were tighter than those observed in clinical studies. The sponsor concluded that bioequivalence is expected for a batch at the product specification limit Q = 80% at 30 min, based on the PBBM. This conclusion is not shared by the EMA regulators given the identified caveats of the model. Instead, as mentioned above, the suggested in vitro dissolution specification for the drug product was accepted based on the in vitro dissolution of several pivotal batches and two nonbioequivalent batches.

2.4.3 Regulatory Perspective

In summary, the EMA regulators identified issues with uncertainties in P eff and gastric emptying (fitted values), fluid volumes in the GI tract, the formulation switch, the manual dose adjustment during model verification, and the lower variability in the simulated virtual population compared with the clinical studies. The model would not have been accepted to justify an extended in vitro dissolution safe-space beyond Q = 80% in 30 min, if this had been requested, because it would then be considered of medium to high regulatory impact. In these cases, the described issues would have been considered critical. To illustrate the decision-making process from the initial question to the final answer regarding model acceptance, the EMA regulatory assessors filled in the credibility assessment matrix for the case of lesinurad as shown in . At the EMA, filling in the credibility matrix is considered good practice in regulatory submissions that include modeling and simulation with medium and high regulatory impact applications. In this case, the matrix was filled in for lesinurad for an illustrative purpose only.

Case Study 3: Justification of Formulation Bioequivalence Despite Differences in Dissolution for Acalabrutinib Capsules. Rebecca Moody (FDA)

2.5.1 Background

AstraZeneca submitted a PBBM case study based on publicly available data from several publications on acalabrutinib capsules. Acalabrutinib is a BCS Class II weak diprotic base drug substance formulated as a 100 mg IR capsule for the treatment of adult patients with mantle cell lymphoma who have received at least one prior therapy. The purpose of the submitted PBBM was to evaluate whether differences in the in vitro dissolution between two drug product batches had an impact on the in vivo absorption, measured via PK end points. Specifically, during product development, two batches (W026394 and L0505009) had similar dissolution profiles in low pH media (pH 1) but different dissolution profiles in pH 4.5 acetate buffer and FaSSIF media, as assessed by the similarity factor (f2). It is noted that both batches were dosed in clinical trials in parallel studies with adequate outcomes.

2.5.2 Model Development, Validation, and Application

In summary, the PBBM strategy involved modeling of individual subject PK data and then validating whether that population was able to reproduce the observed mean C max and AUC from several different clinical scenarios. Individual models were constructed via top-down analysis for an 8-subject population for which microdose IV and oral capsule PK data were available. In building the oral absorption model, the gut V max for CYP3A4 was individually fitted based on the oral PK profiles, and a subject-specific gastric retention time was added to account for observed lag times.
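The similarity factor used to compare the two batches can be computed directly from mean percent-dissolved profiles at matched time points. The profiles below are invented, and the regulatory use of f2 carries additional applicability constraints (e.g., on variability and on points beyond 85% dissolved) that are ignored in this sketch.

```python
import math

def f2(reference, test):
    """Similarity factor for two dissolution profiles sampled at the same
    time points (% dissolved); f2 >= 50 is conventionally 'similar'."""
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + msd))

# Illustrative % dissolved at 10, 15, 20, 30, and 45 min.
batch_a = [35, 55, 70, 85, 92]
batch_b = [30, 48, 62, 78, 88]
print(round(f2(batch_a, batch_b), 1))
```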
In vitro dissolution was incorporated into the model mechanistically through the P-PSD approach. In the discussion of the P-PSD approach, it was noted that an appropriate number of in vitro dissolution data points are useful for fitting (i.e., to capture the full profile), that the fewest number of bins should be used for fitting, and that the prediction ability of the fitted P-PSD needs to be validated in several pH media to be considered acceptable. Ideally, a well-structured framework is in place prior to extracting the P-PSD and identifies the dissolution media to be used for P-PSD extraction (and why), the optimization process for fitting and reducing the number of bins, and the steps for validation. For the acalabrutinib case study, however, a P-PSD was extracted for 4 different drug product batches using different dissolution conditions (i.e., pH 1 for Phase 1 capsules vs pH 6.8 for batches representative of commercial capsules) and the number of bins (i.e., 10) was not fully justified. For the PSA, several physiological and drug related parameters were varied to assess their impact on acalabrutinib exposure for one subject of the 8-subject population. The one subject was selected to be representative of the population based on their total clearance, volume of distribution, and gut CYP3A4 V max . From the PSA, it was clear that there are relevant differences in acalabrutinib exposure ( C max and AUC) due to several parameters; however, only CYP3A4 V max and gastric residence time were incorporated into the model with individual fitting for both parameters. Other parameters, such as P eff , were assumed to be constant across the population without sufficient justification. In addition, the model would benefit from clarity regarding the ranges of the parameters tested and whether they are representative of the ranges expected in the greater population. 
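A one-at-a-time parameter sensitivity sweep of the kind described above can be illustrated on a toy one-compartment first-order absorption (Bateman) model; the parameters and perturbation ranges below are invented and unrelated to acalabrutinib.

```python
import math

def cmax_1cpt_oral(dose=100.0, f=0.5, ka=1.2, ke=0.2, v=40.0):
    """Cmax of a one-compartment model with first-order absorption
    (Bateman equation); requires ka != ke. Illustrative parameters."""
    tmax = math.log(ka / ke) / (ka - ke)
    scale = f * dose * ka / (v * (ka - ke))
    return scale * (math.exp(-ke * tmax) - math.exp(-ka * tmax))

baseline = {"ka": 1.2, "ke": 0.2, "v": 40.0}
base = cmax_1cpt_oral(**baseline)

# One-at-a-time sweep: perturb each parameter by +/-50% and report the
# resulting % change in Cmax relative to baseline.
for name, value in baseline.items():
    for fold in (0.5, 1.5):
        c = cmax_1cpt_oral(**{**baseline, name: value * fold})
        print(f"{name} x{fold}: {100 * (c / base - 1):+.1f}% Cmax")
```

A full PSA would also document the tested ranges and justify that they span the variability expected in the wider population, which is the gap noted for this case study.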
Addressing the uncertainty regarding input parameters, and the potential clinical relevance of those uncertainties, would help in assessing the model consequence and reliability. The model was validated by evaluating the accuracy of the 8-subject population in simulating acalabrutinib exposure from 16 different clinical scenarios. The model-predicted C max and AUC ratios between the test (W026394) and reference (L0505009) batches were close to 1.0, and the 90% confidence intervals fell within the bioequivalence (BE) limits of 0.8–1.25. 2.5.3 Regulatory Perspective Overall, considering the totality of evidence, the risk of bioinequivalence for drug product batches W026394 and L0505009 due to dissimilar dissolution at high pH (i.e., pH 4.5 and above) was low. However, the application of the PBBM for future use is considered limited due to uncertainties. Specifically, questions remain concerning the use of an 8-subject data set as representative of the wider population (without being able to capture within-subject variability) and the selection of fitted parameters without appropriate justification. To support future application of the PBBM, additional data from clinical studies involving DDI could support assumptions regarding CYP3A4 V max . It could also be beneficial to incorporate power and sample size calculations based on the observed variabilities from population studies so that the model would have greater utility and wider generalizability. As a future discussion point for the modeling community, health authorities raised concerns about model multiplicity and its unknown consequences. There were at least 3 acalabrutinib PBBMs highlighted in this case study: (1) the GastroPlus model submitted for regulatory approval to the U.S. FDA, (2) the GastroPlus model described in peer-reviewed publications, and (3) the model developed in Simcyp. This adds an additional layer of complexity, as slight differences were noted between each model.
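The average-bioequivalence criterion applied here (90% confidence interval of the test/reference ratio contained within 0.8–1.25) can be sketched as follows. The per-subject log-ratios are hypothetical values for an 8-subject data set.

```python
import math
from scipy import stats

def be_90ci(log_ratios, lower=0.8, upper=1.25):
    """Average-BE check: the 90% CI of the geometric mean test/reference
    ratio must lie within [lower, upper]. Input is per-subject
    ln(test/reference) differences from a paired (crossover-style) design."""
    n = len(log_ratios)
    mean = sum(log_ratios) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in log_ratios) / (n - 1))
    half = stats.t.ppf(0.95, n - 1) * sd / math.sqrt(n)  # two one-sided tests at 5%
    lo, hi = math.exp(mean - half), math.exp(mean + half)
    return lo, hi, (lo >= lower and hi <= upper)

# Hypothetical 8-subject ln(T/R) values with ratios close to 1.0
d = [0.00, 0.02, -0.01, 0.01, -0.02, 0.00, 0.01, -0.01]
lo, hi, ok = be_90ci(d)
print(round(lo, 3), round(hi, 3), ok)
```

With ratios this tight, the interval sits well inside the limits; with only 8 subjects, however, even moderate variability widens the CI quickly, which is part of the reviewers' concern about generalizing from such a small data set.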
Where is the boundary for “fit for purpose”? In an ideal world, would there be one model for one drug product, one model that would be used throughout the drug product’s entire lifecycle for all purposes (e.g., DDIs, postapproval changes, biowaivers, etc.)? Case Study 9: A Retrospective Case Study on Fluconazole. Øyvind Holte (Norwegian Medical Products Agency) 2.6.1 Background The data included in this case study was selected from a wide body of data that exists for fluconazole–different strengths of tablets and capsules, oral solution, and also an intravenous formulation. The results of several clinical PK studies, performed between 1983 and 2019, were available for development and verification of the model. The company investigated whether PBBM could demonstrate bioequivalence between the various drug products despite significantly differing dissolution profiles and whether a validated PBBM approach could provide the ability to establish a dissolution safe space for bioequivalence. 2.6.2 Model Development, Validation, and Application IV formulation data were used to confirm the clearance and volume of distribution for fluconazole, readily available from the literature. Second, GI absorption of fluconazole was estimated based on the exposure following dosing of oral solutions (two concentrations). Finally, oral solid dose formulations (tablets and hard capsules) were included in the model, supported by the in vitro dissolution performance (Weibull parametrization or the Johnson model–particle size distribution of fluconazole). A total of 17 simulations were performed to develop the model. Separate data sets were used for model development and model validation. The model concluded that there is no significant food effect for the oral hard capsules. Likewise, fluconazole PK is not affected by the concomitant intake of antacid. The model was further used to predict the bioequivalence ( C max and AUC) of a series of oral solid formulations exhibiting a range of in vitro dissolution rates.
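The Weibull parametrization mentioned above for the fluconazole dissolution data can be illustrated with a short fitting sketch; the profile and parameter values below are hypothetical, not the actual fluconazole data.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, fmax, td, b):
    """Weibull dissolution model: F(t) = Fmax * (1 - exp(-(t/td)^b)),
    with td the 63.2% time scale and b the shape factor."""
    return fmax * (1.0 - np.exp(-(t / td) ** b))

# Hypothetical % dissolved of an IR product over 90 min (noiseless)
t = np.array([5, 10, 15, 20, 30, 45, 60, 75, 90], dtype=float)
data = weibull(t, 98.0, 18.0, 1.4)

popt, _ = curve_fit(weibull, t, data, p0=[100.0, 20.0, 1.0])
fmax, td, b = popt
print(round(fmax, 1), round(td, 1), round(b, 2))
```

Once fitted, the Weibull parameters give the absorption model a smooth, continuous dissolution input, which is convenient when the raw profiles were measured under heterogeneous conditions, as was the case here.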
Compared to a commercial formulation, some of these formulations had dissolution profiles that were clearly not “similar” based on the f2 algorithm. In other words, these dissolution data would typically not be accepted to support a BCS-based biowaiver. The model predicted that some of these formulations were bioequivalent despite an f2 < 50. The formulations with the slowest dissolution rate were predicted by the model to be nonbioequivalent. These results were used to justify a possible widening of the acceptance criteria for the dissolution test. VBE trials were ultimately performed to replicate the results of the previously conducted PK studies. Furthermore, VBE trials were used to establish the appropriate dissolution criteria, based on virtual batches having dissolution profiles between an unacceptable (slow) batch and the slowest among the acceptable clinical batches. Based on the VBE trials, a suitable acceptance criterion is NLT 80% dissolved in 75 min. This is substantially wider than the current acceptance criterion at 30 min, which is typical for an immediate-release drug product. 2.6.3 Regulatory Perspective It is acknowledged that, for the purpose of this case study, not all relevant details were available. The clinical trials were conducted without any intent of supporting PK modeling, and certain drug product details relevant to modeling are not available. Based on the data presented for this case study, the regulators had many questions regarding the conclusions made by the company. There are uncertainties regarding the model’s ability to predict the PK of fluconazole. VBE trials were performed, based on the model, to recapitulate the observed results from the available BE studies. However, certain assumptions made by the company were in question, and the conclusions drawn from the VBE were not the same as those reached by the various regulatory authorities.
In conclusion, based on the data provided with the case study, the PBBM provides limited value and would probably not be considered sufficient as a substitute for clinical data in a regulatory setting. The company’s conclusions, which were supported by the model predictions, would normally need to be supported by a bioequivalence study (in the absence of modeling). From a patient safety perspective, future batches of a drug product should not differ significantly from the batches used in a pivotal clinical trial. Therefore, wide dissolution rate acceptance criteria are normally not acceptable. A large batch-to-batch variation could indicate nonbioequivalence. It is acknowledged that, for certain drug products, the in vitro dissolution rate may not be directly related to clinical efficacy and safety, and relatively large differences can be acceptable. PBBM is well suited to support such decisions. The model development presented with this case study is based on a substantial amount of clinical data–more than what can be expected for a new drug product under development. Still, the data have certain deficiencies. As indicated above, the clinical trials were not planned and conducted with the development of a PBBM in mind. For example, detailed information regarding the PSD was not available for all of the batches, and this model input parameter was therefore assumed or estimated. Also, the conditions used for dissolution testing were not the same for all of the drug products: a higher paddle rotation speed can lead to a faster dissolution rate. This makes the head-to-head comparison of the various dissolution results, and their use as model input, difficult. For a bottom-up modeling approach, such uncertainties reduce the credibility of the model predictions. Apparently, no sensitivity analysis was performed during the model development.
Several of the simulations overestimated the C max and/or the AUC, and no efforts were made to adjust or correct the initial model based on these observations. Although the model predicted no significant effect of concomitant antacid or food intake, the confidence in such results is reduced by the underlying uncertainty of each model estimation. In conclusion, it is believed that the presented PBBM would not be accepted as a substitute for BE trials to support a marketing authorization. However, the concerns indicated above would possibly be resolved during an application procedure. Panel Discussion The panel discussion brought together the following regulators from multiple health authorities: Rebecca Moody (FDA), Luiza Borges (ANVISA), Maria Malamatari (MHRA), Øyvind Holte (Norwegian Medical Products Agency), Shereeni Veerasingham (Health Canada), and Shinichi Kijima (PMDA). The moderators were Paul Seo (FDA) and Sumit Arora (Janssen). The panel members were asked a series of questions regarding model parametrization. 2.7.1 Q1: What Is Your Opinion on the Use of Fitted Parameters versus Generated Data. In Particular, What Level of Fitting/Extrapolation Would Be Acceptable? Øyvind Holte (Norwegian MPA) pointed out that, if model input parameters are fitted, it would be useful for them to be constant during model verification and validation where relevant. The model verification would in fact highlight whether the assumptions made or the model parameters that were fitted are correct (or not). For example, when dissolution data are introduced in a PBBM with a mechanistic model such as the Z-factor or P-PSD, the adequacy of the Z-factor or P-PSD should be verified in vitro by checking if the dissolution of the same batch obtained using different methodologies can be adequately predicted.
This step should be performed on several drug product batches of the same formulation and process to verify the dissolution model adequacy prior to its introduction in the PBBM. The panelists expressed the need for more data to demonstrate how the P-PSD works. Xavier Pepin (Simulations Plus, Inc.) responded that the P-PSD represents the surface of drug substance available in the drug product for dissolution and that a measurement of this surface area with an orthogonal technique could be difficult (See ). Ultimately, the P-PSD validation in vitro and in vivo in different conditions of the GI tract demonstrates its usability, as was suggested by the panelists. 2.7.2 Q2: How Important Is the Model Contribution to the Regulatory Decision for Quality Aspects of Drug Development, Submission, and Postapproval Changes? Kuemmel et al. have developed a credibility assessment framework applicable to model-informed drug development which defines a model influence, i.e., whether there exist additional data to support the question that the model tries to answer, and the decision consequence, i.e., the potential consequences to the patients if the decision supported by the model were wrong. Both model influence and decision consequences can be used to assess the risk of the PBBM. Shereeni Veerasingham (HC) stated that there is no current guideline in Canada regarding the development, validation, and use of PBBM. A case-by-case approach is employed, and the totality of the data submitted to support the file application is used to guide the decision. Luiza Borges (ANVISA) pointed out that, for ANVISA, the PBBM is evaluated in terms of proposed application, development, and validation. The identification of the most influential model parameters is key. The data sets used for model validation are also examined for relevance. Uncertain parameters that are fitted would be expected to be highlighted.
Finally, the totality of the relevant data provided for the model application is then considered for the evaluation. Shinichi Kijima (PMDA) indicated that a few submissions to PMDA were reviewed using a quality decision-making process, and PMDA’s cross-functional team was involved in those reviews. Rebecca Moody (FDA) stated that FDA typically reviews submissions of PBBMs with an interdisciplinary approach. The aim of the review is to understand the risks to the patient and what the model indicates in terms of product quality variations. Like other agencies, the totality of the data is considered to support the decision. Øyvind Holte (Norwegian MPA) indicated that the number of PBBM cases reviewed by EMA is currently less than 5 and that EMA is therefore relatively new to this type of submission. It was also recommended to contact EMA in advance, if the intent of the PBBM is to waive a clinical evaluation, to set respective expectations, to agree on a process, and to organize the right review team. 2.7.3 Q3: What Is the Level of Parameter Justification Expected for a PBBM? Panelists indicated that, whether parameters originate from experiments or are fitted to other sources of data, it is useful for the measuring methods to be standard and well described. Fitting parameters within an acceptable range is not prohibited; however, justification with adequate scientific references would be helpful. 2.7.4 Q4: Are Virtual Bioequivalence Studies Acceptable? Panelists indicated that the number of VBE studies in PBBM submissions is growing. Since this is a clear direction that industry is taking, the panelists suggested that the populations included in VBE studies should be wider. In addition, within-subject variability should be represented, ideally using mechanistic models, and compared to that observed in the clinic as much as possible. The virtual studies would be expected to reproduce the observed variability.
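The panel's points about within-subject variability and trial size in VBE studies can be made concrete with a simplified Monte Carlo sketch. This is a paired-design approximation with an assumed within-subject CV, not a full mechanistic VBE; the subject numbers, CV, and ratios are illustrative.

```python
import math
import random
from scipy import stats

def tost_pass(diffs, limit=math.log(1.25)):
    """BE is concluded when the 90% CI of the mean log(T/R) difference
    lies within +/- ln(1.25) (equivalent to two one-sided tests at 5%)."""
    n = len(diffs)
    m = sum(diffs) / n
    sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    half = stats.t.ppf(0.95, n - 1) * sd / math.sqrt(n)
    return (m - half > -limit) and (m + half < limit)

def vbe_power(n_subj=24, gmr=1.0, cv_intra=0.20, n_trials=500, seed=1):
    """Monte Carlo power of a simplified paired BE trial: per-subject
    log(T/R) differences drawn with the assumed within-subject CV."""
    rng = random.Random(seed)
    sw = math.sqrt(math.log(1.0 + cv_intra ** 2))  # within-subject SD, log scale
    sigma_d = math.sqrt(2.0) * sw                  # SD of a two-period difference
    hits = 0
    for _ in range(n_trials):
        diffs = [rng.gauss(math.log(gmr), sigma_d) for _ in range(n_subj)]
        hits += tost_pass(diffs)
    return hits / n_trials

print(vbe_power())           # near-certain pass when the true ratio is 1.0
print(vbe_power(gmr=1.15))   # power erodes as the true ratio nears 1.25
```

Running this kind of calculation against the variability actually observed in the clinic is one way to address the panel's expectation that virtual studies reproduce observed variability rather than assume it away.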
2.7.5 Q5: Are There Any Other Expectations in Terms of the Content and Format for Submitted PBBMs? Panelists mentioned that visualization of the whole modeling strategy is very important, in addition to the assumptions made and their verification. Panelists also expressed the desire to see the model development history, i.e., why certain changes were made from default values, their magnitude, and how it impacted the model outcome. The industry participants believe that a report template could be useful for both regulators and industry to set expectations for future submissions. It would be important to include some details in each section to describe data expectations, with some examples. A template will be proposed by industry experts as a separate article. Breakout Sessions The overview of Day 1 presentations and BO sessions is presented in . 3.1 BO Session A - Solubility: From in Vitro Best Practices to in Vivo Relevance This session began with speaker Deanna Mudie (Lonza) and was led by Evangelos Kotzagiorgis (EMA) and Claire Mackie (Janssen), with Tessa Carducci (Merck & Co., Inc., Rahway, NJ, USA) and Mario Cano-Vega (Amgen) as scribes. 3.1.1 Presentation Solubility is a fundamental driver of drug bioperformance. It is one of the fundamental properties that defines the BCS and is an important input to PBBM. Generally, it defines the maximum concentration of a drug in solution (e.g., in GI fluid) at equilibrium or a metastable, supersaturated state. A compound’s solubility is influenced by the interplay between the properties of the drug, the excipients within the formulation, and the GI fluid. This interplay affects the overall bulk solubility along the GI tract and the solid particle surface solubility, as well as solubilization in bile, fats, and formulation components. Overall, solubility impacts a drug’s oral bioperformance via its influence on properties such as dissolution, precipitation, and maximum concentration in solution, i.e., the driving force for absorption. 3.1.1.1 Case Study 1: Impact of Excipients on Solubility and Dissolution Deanna Mudie discussed a case study showing how excipients can impact the solubility and dissolution rate of the BCS Class 2 drug substance, belinostat.
Belinostat was formulated as three different spray dried amorphous solid dispersions (ASDs) using different dispersion polymers, one enteric (HPMCAS-M) and the other two neutral (PVP K30 and PVP VA64). Belinostat amorphous solubility was measured in the absence and presence of these polymers using an in vitro UV solvent shift test. When no polymer was present, amorphous solubility exceeded 1800 μg/mL in gastric medium (pH 2 HCl) and 2500 μg/mL in intestinal medium (phosphate buffer at pH 6.5 containing FaSSIF powder). However, in the presence of polymer, the amorphous solubility was depressed at least 2- to 6-fold with the highest depression for PVP VA. When the extent of dissolution of ASDs was measured in a nonsink dissolution test in intestinal medium, the results matched the amorphous solubility values measured in the UV solvent shift test. However, the results differed when a transfer dissolution test was run with ASDs dissolved in a gastric medium (pH 2 HCl) at a nonsink dose, where concentrated intestinal medium (phosphate buffer at pH 6.5 containing FaSSIF powder) was added after 30 min . In this case, while the PVP VA and PVP K30 ASDs reached the solubilities measured in the solvent shift test, solubility was significantly lower for the ASD made with HPMCAS-M. This was because these ASD particles aggregated in the gastric medium due to the low solubility of HPMCAS-M at acidic pH. In vitro dissolution profiles were incorporated into oral absorption simulations, using the Takano Z-factor method in GastroPlus. The HPMCAS-M ASD had the smallest z-factor and the largest calculated effective particle radius, reflecting the particle aggregation observed in the dissolution test. The PVP K30 ASD had the highest z-factor and driving force for dissolution. This mirrors an in vivo study in fasted beagles, where the PVP K30 ASD performed best . Furthermore, oral absorption simulations gave a good description of the concentration–time profiles. 
It was clear that the ASD dispersion polymer impacted the belinostat in vivo performance by attenuating amorphous solubility and driving effective particle size. High belinostat and polymer solubility in gastric medium maximized in vitro dissolution rate and in vivo AUC and C max . 3.1.1.2 Case Study 2: Impact of Excipients on Solubilization and Permeability In another example, Deanna Mudie showed how nanosized drug–polymer colloids can increase the driving force for absorption. This example was for itraconazole, a highly lipophilic BCS 2 weak base formulated as spray dried ASDs using different grades of HPMCAS. Itraconazole ASDs formed nanosized drug–polymer colloids in the intestinal donor medium of an in vitro membrane flux test, contributing to “dissolved” concentrations above the amorphous solubility . Concentration and size of drug–polymer colloids were determined using microcentrifugation, ultracentrifugation, and dynamic light scattering. More colloids were produced with the ASD made using hydrophilic HPMCAS-L than with the more hydrophobic HPMCAS-H. The marketed formulation, Sporanox, did not form drug–polymer colloids. Drug–polymer colloids increased the rate of permeation into the acceptor medium of the in vitro membrane flux test with the fastest rate seen for the highest colloid-forming, HPMCAS-L ASD. Faster permeation occurs because absorption of these formulations is limited by the unstirred water layer (UWL) adjacent to the membrane, and drug–polymer colloids increase effective drug diffusivity by acting as “shuttles” and helping to replenish free drug at the membrane surface. , This phenomenon was accounted for in oral absorption simulations by modifying the effective permeability ( P eff ) in GastroPlus to account for the higher P eff of colloid-forming formulations ( P eff, nano ). 
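The "shuttle" effect described above can be illustrated with a simple film model of the unstirred water layer, in which the total flux to the membrane is the sum of free-drug and colloid contributions. This is a minimal sketch, not the implementation used in the case study; the diffusivities, concentrations, and UWL thickness below are illustrative assumptions.

```python
def uwl_flux(c_free, c_colloid, d_free, d_colloid, h_uwl):
    """Steady-state film-model flux across an unstirred water layer (UWL)
    when both free drug and drug-polymer colloids diffuse toward the
    membrane and colloids replenish free drug at its surface:
        J = (D_free*C_free + D_colloid*C_colloid) / h_UWL
    (assuming sink conditions at the membrane)."""
    return (d_free * c_free + d_colloid * c_colloid) / h_uwl

# Illustrative values only (cm, s, mg/mL): colloids diffuse ~10x more slowly
# than free drug but contribute a 10x higher "dissolved" concentration.
no_colloids = uwl_flux(c_free=5e-3, c_colloid=0.0, d_free=7e-6, d_colloid=7e-7, h_uwl=3e-3)
with_colloids = uwl_flux(c_free=5e-3, c_colloid=5e-2, d_free=7e-6, d_colloid=7e-7, h_uwl=3e-3)
print(f"flux enhancement: {with_colloids / no_colloids:.1f}x")
```

With these illustrative numbers, the colloid contribution doubles the flux, consistent with the qualitative picture that a higher P eff can be assigned to colloid-forming formulations; when the colloid concentration is small relative to the free plus micelle-bound drug, the enhancement vanishes.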
When these ASDs were administered to fasted rats, a trend similar to the in vitro experiments was observed, with the highest absorption rates corresponding with the highest colloid concentrations. Absorption simulations captured the concentration–time profiles well . However, drug–polymer colloids do not always improve the absorption. Drug–polymer colloids have the potential to improve absorption by increasing effective drug diffusivity when absorption is solubility-permeability-limited and permeation is UWL limited. Also, the colloid concentration must be large compared to the concentration of unbound plus micelle bound drug. The influence of drug–polymer colloids on permeation can be predicted by comparing calculated P eff, nano to P eff and running PSAs. For this case study, it was concluded that drug–polymer colloids in excess of amorphous solubility increased the absorption rate of itraconazole ASDs. Drug–polymer colloid concentration can be measured in vitro, and P eff, nano can be used to model the influence on in vivo performance. 3.1.1.3 Case Study 3: Impact of Dissolved Drug on Surface Solubility and Dissolution Deanna Mudie discussed how dissolved acidic or basic drugs can influence solid particle surface solubility and dissolution rate by modulating the surface pH. This example was for acalabrutinib, a BCS 2 weak base. Acalabrutinib free base shows a 43% reduction in AUC when taken with PPI due to reduced solubility and gastric dissolution at elevated gastric pH. A maleate salt form of acalabrutinib mitigates this effect. Surface pH can be estimated in vitro by measuring the pH of a saturated solution of the drug in the relevant medium. Results of measurements of acalabrutinib in HCl or NaOH were shown for an acalabrutinib ASD, the crystalline free base, and the maleate salt form. 
For the crystalline and amorphous free base, the pH of a saturated solution was higher than the starting bulk pH below the highest acalabrutinib p K a , with a larger pH change for the amorphous drug due to its higher intrinsic solubility. On the other hand, a saturated solution of the maleate salt form showed minimal pH change at low pH, but a decrease in slurry/surface pH above pH max . Modeling dissolution rate using bulk rather than surface pH carries a risk of misrepresenting dissolution rate for cases when surface pH differs from bulk medium pH. Surface solubility can be accounted for in oral absorption software by, for example, setting bulk pH equal to surface pH or inputting surface solubility rather than bulk solubility as a function of pH in, e.g., GastroPlus. Bottom-up oral absorption predictions of crystalline and amorphous acalabrutinib in fasted beagle dogs treated with either pentagastrin (gastric pH ∼ 1–2) or famotidine (gastric pH ∼ 6–7) provided good in vivo study prediction accuracy (absolute average fold error of AUC 0-inf < 1.6). However, not accounting for surface pH/solubility only modestly affected the simulations. A 15–20% difference in simulated AUC and C max was observed for the crystalline free base in pentagastrin-treated dogs, with no difference for the other simulations. This result is attributed to the rapid dissolution rate and solubility-limited absorption of acalabrutinib at bulk pH 2 and similarity between bulk and surface pH at pH 6. However, Pepin et al. modeled dissolution rate of crystalline acalabrutinib and found that use of bulk instead of surface solubility led to an overall 48% overprediction across the GI pH range, with prediction error highest at bulk pH 4.5 (up to 250%) where a difference between surface and bulk pH is observed and dissolution rate is much slower. Deanna Mudie discussed some criteria for predicting when a weakly basic or acidic drug or excipient would tend to modulate surface pH and dissolution. 
For example, the tendency for pH modulation increases as weak acid p K a decreases or weak base p K a increases, when intrinsic solubility increases, and when buffer capacity decreases. Published calculations using inputs such as p K a (s), intrinsic solubility, and buffer properties can be used to predict when surface pH is not equal to bulk pH. , In addition, surface pH changes are most likely to impact oral absorption simulations when dissolution is rate-limiting. PSAs were conducted to determine the sensitivity. For this case study, it was concluded that acalabrutinib can modulate surface pH, and the extent and direction of pH modulation depends on solid form type (e.g., amorphous, crystalline, salt). The extent to which drug surface pH modulation in vitro manifests as changes in AUC and C max in vivo and in silico depends on drug, formulation, and fluid properties. To end the talk, Deanna Mudie concluded that solubility drives oral bioperformance through dissolution, precipitation, and permeation and is influenced by the interplay between the drug, the formulation, and the GI fluids. Importantly, both solubility and bioperformance can be predicted using targeted in vitro tools combined with PBBM. 3.1.2 Discussion During breakout session A, participants discussed fundamental questions regarding the measurement and utilization of solubility data. 3.1.2.1 Q1: What Specifically Do Bulk and Surface Solubility Measurements Assess and Why Are These Assessments Crucial in the Context of PBPK/PBBM Modeling? Bulk drug solubility allows the calculation of drug amount dissolved at equilibrium if the volume of the medium is known, and its properties are not altered with time. Conversely, surface solubility is the drug solubility at the drug solid–liquid interface. While bulk solubility influences factors, such as solution-mediated precipitation, surface solubility drives drug dissolution and surface-mediated precipitation. 
For weakly acidic and basic drugs, surface pH may deviate from bulk pH when there is an acid–base reaction occurring at the drug–liquid interface. , Consequently, performing both bulk and surface solubility evaluations is important to accurately capture dissolution and precipitation rates in PBBMs. The choice of buffer for these measurements was highlighted as a key consideration and should align with the specific region of the GI tract being simulated. Furthermore, the session discussed the dynamic impact of excipients on the surface and bulk pH. For example, acidulants included in formulations gradually dissolve over time, and the extent of their effect depends on both time and concentration. This comprehensive discussion illuminated the critical role of understanding bulk and surface solubility and the contributing factors in making informed decisions during drug product development. 3.1.2.2 Q2: Which Media (e.g., FaSSIF V1 and V2) Should Be Chosen for Accurate Comparison to the in Vivo Situation, Considering Factors Such as the Presence and Concentration of Bile Salts, Fats in the Stomach, and Buffer pH? Participants agreed that there is not a one-size-fits-all “best” version of simulated GI media to choose for accurate prediction of in vivo conditions but that each may serve distinct purposes in modeling scenarios. , When measuring drug solubilities across different versions of FaSSIF and aspirated human intestinal fluids, researchers have found solubility values to vary between media. , In addition, no single medium captures the normal variation in these fluids. It is important to understand the properties and compositions of different types of simulated media and how they may interact with the drug product of interest to influence solubility, dissolution, and precipitation. For example, fasted state simulated intestinal fluid (FaSSIF) evolved to have a lower buffer capacity when moving from version 1 to version 3.
Version 3 incorporates additional bile components (e.g., lecithin hydrolysis products and cholesterol) that are not found in versions 1 or 2. Factors such as buffer capacity and buffer species can impact surface solubility for acidic and basic drugs, and the type and concentration of bile components impact solubilization, especially for lipophilic drugs when nonionized at the medium pH. Some participants noted that FaSSIF v1 appears to be suitable for BCS classes 1 and 3 compounds, whereas FaSSIF v2 may better capture solubilities of some BCS class 2 and 4 compounds. Investigating solubility in the fed state can be challenging due to the dependence of media composition and resulting drug solubility on meal content. In addition, the inclusion of components such as fats in simulated gastric media requires careful preparation and complicated analytical techniques for assessing drug solubility. Nevertheless, gaps in the ability to model drug absorption in the fed state dictate the need to consider the impact of meal components on drug solubility. Several types of simulated fed state media, such as FeSSIF, FeSSGF, and FEDGAS (Biorelevant, London, UK) are available for this purpose. Considering these findings, the session concluded that it is crucial to deliberate whether customizing the buffer for specific applications or establishing standardized buffers is the most prudent approach. In any case, panelists emphasized the importance of providing precise and comprehensive descriptions when selecting buffers or biorelevant media. Given the limited experience in this field, it becomes imperative to offer supplementary information to facilitate a better understanding of the decisions made and their impact on the model. 3.1.2.3 Q3: When Is the Optimal Time to Measure the Solubility in Human Aspirates? 
Measuring drug solubility in human aspirates has not gained widespread adoption due to factors such as availability and cost; however, participants recognized its potential benefits, especially in improving modeling of poorly soluble, nonionizable lipophilic drugs. These drugs often exhibit wide variation in solubility as a function of micelle or vesicle composition, since simulated fluids (e.g., FaSSIF) lack many endogenous, bile- or vesicle-forming components. Participants reached a consensus that the benefit of using aspirated human fluid rather than simulated fluid is probably less important if the drug is ionized in the GI tract. In these cases, pH is the main driver of solubility. 3.1.2.4 Q4: For Weak Bases, Is There Added Value in Measuring Solubility Across a Broad pH Range, Specifically pH 8–9? If so, Which Media Should Be Considered? The participants agreed that the pH range over which solubility is measured is an essential factor to consider for weakly basic and weakly acidic drugs. This pH range should cover the GI physiology, i.e., from approximately pH 1 to 8. Experimental points should capture multiple degrees of ionization (e.g., 0% ionized, 50% ionized, 90% ionized) depending on the p K a . Measurements at pH values >8 (using NaOH for adjustment) may be needed to capture drug intrinsic solubility for weak bases (i.e., highest basic p K a + 2 pH units). One may also consider determining solubility in purified water and unbuffered media to estimate the surface pH of the drug. For salts of weak acids and bases, the measurement of the solubility at and around pH max is recommended. It was emphasized that researchers should measure the medium pH prior to addition of drug and the pH of the final saturated solution. Both start and final pH values should be reported.
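The rule of thumb of measuring up to roughly 2 pH units above the highest basic p K a can be rationalized with the Henderson–Hasselbalch relationship. The sketch below assumes a monoprotic weak base below pH max, with an illustrative p K a and intrinsic solubility (not taken from the workshop).

```python
def weak_base_solubility(ph, pka, s0):
    """Henderson-Hasselbalch total solubility of a monoprotic weak base
    below pH_max: S(pH) = S0 * (1 + 10**(pKa - pH))."""
    return s0 * (1 + 10 ** (pka - ph))

# Illustrative drug: pKa 5.0, intrinsic solubility S0 = 1 ug/mL.
for ph in (1.0, 3.0, 5.0, 7.0, 9.0):
    print(f"pH {ph:.0f}: {weak_base_solubility(ph, pka=5.0, s0=1.0):.4f} ug/mL")
```

At 2 pH units above the p K a the calculated total solubility is within about 1% of S0, which is why measurements around p K a + 2 are considered sufficient to capture the intrinsic solubility.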
The media composition should also be documented since the medium may contain ions in common with the drug substance, which could depress drug solubility, or lead to salt formation which could change the nature of the drug substance. 3.1.2.5 Q5: What Solubility Value Should Be Employed for Release from an Amorphous Solid Dispersion Containing a Polymer? During the session, participants acknowledged the challenges associated with developing PBBMs for dosage forms containing an amorphous solid dispersion (ASD). When modeling release from ASDs, it is important to understand whether dissolution is controlled by the drug, the polymer, or the combination of the two. When dissolution rate is driven by the drug, the amorphous (i.e., kinetic) solubility in the given medium is likely the appropriate solubility to employ for defining the rate of drug release. However, if the dissolving ASD contains both amorphous and crystalline drug, then the solubility of the crystalline form in that medium and its impact on drug release may also need to be considered. When modeling drug precipitation and redissolution of ASDs, the amorphous solubility and solubilities of any crystalline forms to which the amorphous drug may precipitate should be considered. Some ASDs may undergo liquid–liquid phase separation (LLPS) and precipitate to amorphous nanodroplets, which may then redissolve according to the amorphous solubility. In other cases, amorphous drug may crystallize, and the solubility of the crystalline form will be an important input to account for drug precipitation and solubility limitations to redissolution along the GI tract. It was also emphasized by participants that measuring amorphous solubility in the presence of formulation excipients, such as polymers, is critical. For example, ASD polymers can either decrease amorphous solubility or increase it through the formation of drug–polymer colloids.
, It is worth highlighting that the impact of these excipients varies as a function of the time and concentration. Participants also noted that, for ASDs, acquiring an in-depth understanding of drug speciation, with a particular focus on detecting drug–polymer colloid formation using different analytical techniques, may be necessary since the presence of these species can impact the driving force for drug permeation. These considerations are pivotal for the effective development of PBBMs for ASDs. In conclusion, the breakout session produced several significant takeaways. Participants in this session recognized the inherent complexity of drug solubility and its substantial influence on the development of PBBMs. The discussion brought to the forefront various critical topics, including distinctions between bulk, surface, thermodynamic, and kinetic solubility as well as points to consider during experimental measurements of these parameters. Given the intricate nature of these phenomena, it is strongly encouraged to include details regarding the rationale behind model development for solubility inputs for regulatory submissions. These should comprise the criteria for selecting and applying specific solubility parameters, choosing appropriate models, defining the experimental conditions for measuring solubility values, and highlighting the theoretical assumptions. Additionally, participants advised conducting parameter sensitivity analyses to ensure a robust and comprehensive understanding of the models utilized in drug product quality assessments. Important points to consider when measuring bulk and surface solubilities of crystalline and amorphous drugs and formulations are presented in the Supporting Information . 
3.2 BO Session B - Dissolution Part 1: Development of a Biopredictive Dissolution Method This session began with speaker Raimar Loebenberg (University of Alberta) and was led by Paul Seo (FDA) and Nicoletta Fotaki (Bath University), with Ivy Song (Takeda) and Parnali Chatterjee (FDA) as scribes. 3.2.1 Presentation A typical approach for developing biopredictive dissolution methods for oral drug products is to first classify the molecule of interest according to the BCS and its appropriate subclass depending on the molecule’s functional groups. The next steps involve the choice of dissolution medium and dissolution method and their purpose. For example, a dissolution method used for quality control might be composed of pharmacopeial elements while a biopredictive method can use scientifically relevant setups and media mimicking different GI tract environments (e.g., biorelevant media and the Artificial Stomach and Duodenum (AS&D) apparatus). Another important consideration is the mechanism governing bioavailability by either permeability or dissolution-controlled absorption. If the absorption is permeability-controlled, a minimum dissolution acceptance criterion is desired. Faster dissolution will not change the rate and extent of absorption. This is different if the process is dissolution controlled. Here, any change in drug release will alter the rate of absorption. Currently, there is unfortunately no universal dissolution medium available that can be used for all drugs. The following examples highlight which media and dissolution methods might be useful in the development of biopredictive dissolution methods. 3.2.1.1 Example 1: Permeability-Controlled Absorption Etoricoxib is a weak base and is classified as a BCS II drug substance. A study by Okumu et al. showed that, if a transfer model from the acidic stomach conditions into FaSSIF was used, the drug solubility was increased in the simulated intestinal fluid compared to its equilibrium solubility. 
Essentially, a supersaturated drug solution was formed. Then, a flow-through cell combined with a perfusion protocol mimicking the stomach and the different small intestinal segments was used and a dissolution profile was generated. When this profile was used in simulation software, the observed clinical PK data were predicted with a better fit compared to USP type dissolution profiles. Furthermore, a comparison between a solution and the physiologically mimicking flow-through protocol showed that both resulted in superimposable predictions of the PK profiles. The study concluded that, if the drug is fully dissolved in the stomach, it can form a supersaturated solution in the intestine and behaves like a BCS class I drug. Therefore, the AS&D apparatus may be more appropriate for such BCS IIb drug molecules. 3.2.1.2 Example 2: Dissolution-Controlled Absorption Montelukast sodium is a highly lipophilic drug with acid and basic functional groups. It is a BCS II/IV drug substance. A comparison between dissolution profiles from a USP type 2 apparatus with biorelevant media versus a flow-through protocol using physiologically adapted conditions showed significant differences. In the flow-through cell, the drug release was slower in the first 90 min compared to the USP type test. However, when the data were used in GastroPlus, the flow-through data matched the observed clinical data better than when other dissolution profiles were used as input. An alternative apparatus to the flow-through cell is based on the AS&D apparatus with more compartments. This method is also known as in vivo Predictive Dissolution (iPD). 3.2.1.3 Example 3: Lysosomal Trapping Lysosomal trapping is a potential mechanism to explain slow availability of lipophilic weak bases that otherwise are expected to rapidly appear in the postabsorptive systemic circulation. Predictability of lysosomal trapping is not well developed, although recent efforts aim to standardize testing for lysosomal trapping. 
Lysosomes are enzyme-filled vesicles in the cytoplasm that maintain a low pH inside. A weak base such as dextromethorphan is highly lipophilic at the pH inside an enterocyte. When the molecule crosses the lipophilic membrane of the lysosome, it finds itself at a much lower pH (4.5–5.5). Here, its hydrophilicity significantly increases due to the drop in pH. Due to this shift in its lipophilic properties, the molecule now needs much longer to exit the lysosome. This is a potential reason why it takes more than 16 h for the drug to appear completely in the systemic circulation. Based on simulations, the drug is predicted to completely dissolve in the GI tract and exhibit good permeability. The fraction of the dose absorbed into the enterocytes is about 100% within 2 h. The observed lag in the appearance in the systemic circulation is likely due to lysosomal trapping. For drugs such as dextromethorphan, there is a lag time between the fraction of the dose absorbed into the enterocyte and the drug plasma levels. Setting dissolution specifications on the fraction of the dose absorbed into the enterocyte rather than using drug plasma levels would be beneficial. Recently, an artificial lysosomal fluid and a side-by-side diffusion cell method were developed which can be used to screen for the tendency of drugs to be trapped by lysosomes. 3.2.1.4 Example 4: Enteric Coated Dosage Forms The literature contains many reports of enteric coated dosage forms failing in vivo. In vitro dissolution testing according to the pharmacopeias uses a two-stage approach in which a dosage form is first tested in acid and then in pH 6.8 phosphate buffer. However, if low buffer capacity carbonate buffer is used instead of phosphate buffer, then the dissolution behavior dramatically changes, and depending on the carbonate concentration, the opening of the enteric coat is delayed.
Another in vitro study showed that acidic and basic drugs also impact the delay of the coat opening in the carbonate buffer. Acidic drugs delayed the opening process, while basic drugs accelerated the coat opening. In low carbonate buffer, the coat opening was much slower compared to phosphate buffer. This was also shown for a failed bioequivalence study of pantoprazole. The dissolution profiles of the test and reference products were similar in phosphate buffer but differed significantly in carbonate buffer. Thus, carbonate buffers or other surrogates are useful when developing enteric coated dosage forms. 3.2.1.5 Example 5: Biphasic Dissolution Biphasic dissolution uses an organic layer on top of an aqueous dissolution medium as a sink for the lipophilic drug molecules. The test can be combined with a flow-through cell. In the present study, low buffer capacity (5 mmol) and low volumes (200 mL) were compared with regular strength phosphate buffer and 900 mL. Test tablets containing ibuprofen, which were made by direct compression or granulation using different excipients, were investigated. The results showed that low buffer capacity and low immersion medium volumes have the best ability to detect differences in the manufacturing processes and formulations. Furthermore, organic sinks could allow the aqueous buffer pH to rebound after dissolved acidic drugs, which initially lower the buffer pH, partition into the organic layer. 3.2.1.6 Example 6: Lipid Dissolution The volume of the lymphatic system is larger than that of the vascular system. However, not much attention is given to this compartment in the context of PBBM. Today, many hydrophobic drugs are formulated into lipid drug delivery systems. Long-chain lipids can increase the lymphatic uptake of hydrophobic drugs. This occurs inside the enterocyte. Here, triglycerides and phospholipids are assembled into chylomicrons.
Lipophilic drugs can be loaded into the chylomicrons and exit the enterocyte via the lymphatic pathway. An artificial lymphatic fluid was developed and tested for its sensitivity to inhibition and enhancement of lymphatic uptake. In a study similar to that of biphasic dissolution, a lymphatic compartment was added to a dissolution vessel. Three commercially available drug products containing terbinafine were tested in a USP type vessel and a flow-through cell. The aqueous dissolution of one product was significantly different from that of the other two products. This might be due to excipient differences in the formulations. However, the three products also showed differences in the accumulation of the drug in the lymphatic compartment. This new method is a promising approach to assessing formulations for their lymphatic uptake potential. The model might contribute to in vitro bioequivalence guidelines for lymphotropic formulations. 3.2.1.7 Conclusions First and foremost, the development of a dissolution method is driven by its purpose. When the development of a biorelevant, biopredictive dissolution method is the goal, the following may be considered: Flow-through cells and transfer-models are useful for dynamic dissolution protocols; small volumes and low buffer concentrations could be considered to mimic the physiological environments in the GI tract; carbonate buffers or suitable surrogates are helpful when evaluating enteric coated formulations; biphasic dissolution is an important tool to mimic the GI environment with dissolution and absorption occurring in parallel; and lipid dissolution is a promising approach to assess excipient effects for lymphotropic drugs. 3.2.2 Discussion This breakout session expanded and continued the discussions of the Hot Topic B on “Best Practices for Development of Biopredictive Dissolution Methods” as input into PBBM by taking into consideration the following questions.
3.2.2.1 Q1: When Biorelevant Dissolution Methods (e.g., Multicompartmental) Are Necessary, What Is the Best Way to Use These Methods? Developing a dissolution method should be dependent on its intended use, i.e., whether the method would be used for quality control purposes or for PBBM. For example, for screening for precipitation of weak bases, two-stage tests or transfer models can be useful. Biorelevant dissolution methods mimic biological fluids and physiology and may be developed solely to support PBBM, with no link to the QC dissolution method. In this case, the biopredictive nature of the biorelevant method is verified through the PBBM. 3.2.2.2 Q2: How Many Different Experimental Conditions Should Be Used for a Single Batch? There is no fixed number of experimental conditions that should be used to develop a biopredictive dissolution method. However, relevant sets of experiments could be conducted taking into consideration GI physiology, bile salts, buffer capacity, physicochemical properties of the DS, product design, and release mechanisms to develop biopredictive dissolution methods as input for PBBM. 3.2.2.3 Q3: What Are the Pitfalls of Dissolution (e.g., Degradation, Mixture of Polymorphs, and Precipitation) to Be Careful about and How to Deal with It? Precipitation of drugs is an important consideration in developing a dissolution method. To study the effect of drug precipitation during dissolution testing, transfer experiments are often conducted to estimate the precipitation times as input into PBBM to determine the effect on the bioavailability. 3.2.2.4 Q4: How Do You Separate Artifacts of the Dissolution Test and Its Significance (or Nonsignificance) on in Vivo Response (e.g., Coning Is Often a Dissolution Issue, But Is Minimally a Concern in Vivo)? Sometimes multiple experiments are conducted to address dissolution artifacts such as coning, cross-linking in capsules, etc. 
The use of Apex vessels (previously known as PEAK vessels) to address coning is gaining regulatory acceptance; however, generating as much data as possible early in the product development to address these issues and determine if the developed dissolution method is biopredictive by conducting a PK study is often critical. 3.2.2.5 Q5: How Should Functional Excipient Effects Be Investigated? What Are the Appropriate Methods and How Should Dissolution Methods Be Developed to Evaluate Excipient Effects? Dissolution methods should take into consideration the effect of key/functional excipients, such as the impact of excipients on bulk vs surface pH. Excipients can alter drug release and absorption; therefore, evaluating the effect of functional excipients early on is crucial. Conducting a pilot in vivo PK study when an important functional excipient is present in the formulation may provide utility when building a dissolution safe space. 3.2.2.6 Q6: Depending on DS and DP Properties, What Level of Variation of Critical Biopharmaceutics Attributes (CBA) Is Needed to Demonstrate Discrimination and a Biopredictive Nature for the Dissolution Method? Depending on the product design, release mechanism, >10% variations in functional excipients, and process parameters of the final formulation could be used to demonstrate the discriminating ability of the biopredictive/QC dissolution method and their impact on the bioavailability of the drug product (especially for basic drugs that have pH modifiers and enteric coatings). 3.3 BO Session C - Dissolution Part 2: Modeling in Vitro Dissolution Data This session began with Xavier Pepin (Simulations Plus, Inc.) and was led by Cordula Stillhart (Roche) and Luiza Borges (ANVISA), with Grace Chen (Takeda) and Megerle Scherholz (BMS) as scribes. 
3.3.1 Presentation: Methods for Integrating Dissolution During breakout session C, Xavier Pepin presented a comprehensive overview and description of methods for integrating dissolution profiles into PBBMs, followed by practical considerations on the critical aspects when in vitro dissolution data were used for dissolution model development. This background served as a basis for developing and discussing checklists and a decision tree for the dissolution method selection to support the integration of dissolution data into PBBMs. There are many ways to integrate dissolution into most PBBM platforms. These methods range from lesser to more mechanistic as shown in . For an IR dosage form, using one method over other methods leads to certain assumptions being made regarding the parameters limiting in vivo dissolution. 3.3.1.1 Direct Input The least mechanistic method to integrate dissolution is to use direct input of the in vitro dissolution data into the model. In this case, the assumptions made are that the in vitro dissolution method is representative of the conditions prevailing in vivo, which govern the drug dissolution. In more detail, if such a method is used, one should confirm that neither solubility, drug dose, nor in vivo volume would be limiting the in vivo dissolution, since there are wide differences between the volumes used in vitro and the volumes observed in vivo. In addition, the in vitro hydrodynamics should be representative of in vivo conditions or not impact in vitro release, here again for the same reasons that the in vivo hydrodynamics are different from those in vitro. Such assumptions are reasonable when the drug substance is BCS 1 or BCS 1-like and when the formulation itself is governing the in vitro and in vivo dissolution. 3.3.1.2 Weibull Function The use of a Weibull function fitted to in vitro dissolution data is also a nonmechanistic approach as the in vivo release depends on time only. 
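As an illustration of the Weibull approach, a minimal sketch (invented dissolution data; plateau fixed at 100% dissolved so the fit can be linearized) might look as follows:

```python
import numpy as np

# Invented dissolution data: time (min) vs % dissolved for one batch.
t_obs = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0, 60.0])
f_obs = np.array([18.0, 33.0, 45.0, 55.0, 70.0, 83.0, 91.0])

# Weibull profile with the plateau fixed at 100%:
#   f(t) = 100 * (1 - exp(-(t / mdt)**b))
# which linearizes to: ln(-ln(1 - f/100)) = b*ln(t) - b*ln(mdt)
x = np.log(t_obs)
y = np.log(-np.log(1.0 - f_obs / 100.0))
b, intercept = np.polyfit(x, y, 1)
mdt = np.exp(-intercept / b)        # time-scale parameter, min

def weibull(t):
    """Fitted Weibull dissolution profile (% dissolved at time t, min)."""
    return 100.0 * (1.0 - np.exp(-((t / mdt) ** b)))
```

In practice, all three parameters (plateau included) would typically be fitted by nonlinear regression, and the fitted curve, rather than a linear interpolation between time points, would be passed to the PBBM.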
Similar assumptions to those supporting the direct input of dissolution data are made when using a Weibull function, although it is preferable to use Weibull over direct input, since the Weibull function provides for a smoother dissolution curve passing through the measured dissolution data. For direct input methods, as the number of time points for measuring dissolution is generally limited, interpolating dissolution data with a linear correlation between measurements may lead to inaccurate predictions of in vitro (and in vivo) dissolution. 3.3.1.3 Z-factor The use of the Z-factor vs pH profile or constant Z-factor should provide for a more mechanistic model. The Z-factor introduced by Takano et al. is a lumped factor which is the ratio of the drug diffusion coefficient (D) divided by the product of true density (ρ), radius of the particle (r0), and thickness of the unstirred water layer (h):

Z = D/(ρ·r0·h) (1)

It is evident from this equation that the Z-factor can also be expressed in terms of the initial drug particle radius in the formulation. It is also evident from this equation that there is only one bin (one particle size) in the Z-factor. Hence, if the observed in vitro dissolution rate shows more than one phase, a single bin may not be enough to adequately characterize the dissolution of the particles comprised in the formulation. Multiple release phases could arise from the presence of extra granular fine drug substance and granulated drug substance or the presence of drug substance particles that wet at different rates. In theory, there should not be a dependency of Z-factor on pH, as pH governs the drug solubility and is independently considered in the equation proposed by Takano et al. to predict in vitro and in vivo dissolution. In addition, the fact that the drug diffusion coefficient is an integral part of the Z-factor definition should lead to caution when employing the Z-factor to fit dissolution data obtained in media comprising surfactants.
Indeed, the influence of surfactant micelle size spans an order of magnitude, which would affect the diffusion coefficient of the drug bound to micelles by the same order of magnitude. The size of common micelles summarized from literature data is shown in . 3.3.1.4 P-PSD The product particle size distribution (P-PSD) was introduced by Pepin et al., where the disappearance of solid drug X_s vs time is expressed as

dX_s(t)/dt = −A(t)·(C_S,u − C_u(t))·[f_u·D_u/h_u(t) + (1 − f_u)·D_b/h_b(t)] (2)

where f_u is the drug fraction unbound, D_u is the diffusion coefficient of unbound drug, D_b is the diffusion coefficient of micelle bound drug, A(t) is the available drug surface area at time t, h_u(t) is the unstirred water layer thickness for unbound drug, h_b(t) is the unstirred water layer thickness for micelle bound drug, C_S,u is the unbound drug solubility at the surface of the crystal, and C_u(t) is the unbound drug bulk concentration at time t. A(0) is the initial drug substance surface area, which can be represented as a 1 to 10 bin spherical product particle size distribution, the P-PSD. Since the P-PSD can comprise from 1 to 10 bins, there is enough granularity to fit complex dissolution profiles, including those presenting multiple phases. The number of bins can be tuned to the observed dissolution data, and it is recommended to start from the minimum number of bins and increase the number of bins until there is no difference in the predictive power across the dissolution data observed. The P-PSD approach can be applied to all dissolution equations beyond the one presented above. In fact, in platforms such as DDDPlus (Simulations Plus), SIVA (Certara), and MoBi (Open Systems Pharmacology [OSP]), the P-PSD can be fitted to observed dissolution data. In the above cases, the P-PSD will take the form of a mean spherical particle radius associated with a distribution across the mean. Only one mode of distribution is currently available in these platforms. The equation proposed by Pepin et al.
stems from the approach proposed by Gamsiz et al.; however, it assumes immediate partitioning of drugs to micelles at the surface of the drug, and different thicknesses of the UWL for free and micelle bound drug, according to the equation proposed by Pohl et al.:

h_b(t) = h_u(t)·(D_b/D_u)^(1/3) (3)

A comparison between the use of the Z-factor vs the P-PSD approach is presented in . The increased predictive performance of the P-PSD approach is related to its ability to differentiate the free and micelle bound drug and also the impact of the micelle size on the diffusion coefficient of micelle bound drugs. The Z-factor and P-PSD approaches show a similar shape description of the 100 mg acalabrutinib capsule batch L0505009 dissolution profile in phosphate buffer, pH 6.8. If this dissolution data is used to fit the Z-factor and P-PSD, prediction of dissolution of the same batch in media comprising bile salts shows the advantage of the P-PSD over the Z-factor. The use of the apparent drug solubility in both tested media with the surfactant and the Z-factor fitted on the medium without the surfactant leads to an overestimation of the observed dissolution rate. The drug will dissolve slower due to the smaller diffusion coefficient of micelle bound drug, which is best captured with the P-PSD approach. Recently, two additional models for P-PSD were proposed which integrate the fluid velocity in the USP2 dissolution apparatus, the P-PSD HD, and one model predicting drug and excipient sedimentation and cone formation at the bottom of the USP2 vessel, the P-PSD HDC. These latter models are important to remove the potential bias coming from formulation sedimentation or to integrate the impact of fluid velocity in USP2, which would be important for large particles or large dosage forms such as eroding tablets or pellets.
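To make the multi-bin mechanics concrete, the following minimal sketch (all parameter values invented; simple Euler time stepping; UWL taken as the particle radius capped at 30 μm and scaled by the cube root of the diffusion-coefficient ratio for micelle-bound drug, a Pohl-type correction) simulates a two-bin spherical P-PSD with free and micelle-bound contributions:

```python
# Illustrative two-bin P-PSD dissolution sketch (all values invented;
# not any real product's parameters).
rho  = 1300.0    # true density, kg/m^3
D_u  = 8e-10     # diffusion coefficient of unbound drug, m^2/s
D_b  = 8e-11     # diffusion coefficient of micelle-bound drug, m^2/s
f_u  = 0.5       # fraction of dissolved drug that is unbound
Cs_u = 0.2       # unbound solubility at the crystal surface, kg/m^3
dose = 50e-6     # 50 mg expressed in kg
V    = 0.9e-3    # medium volume, m^3 (900 mL)

radii = [5e-6, 20e-6]                 # bin radii, m
mass  = [dose * 0.4, dose * 0.6]      # mass per bin, kg
r0    = list(radii)

def uwl(r, D):
    """UWL thickness: radius capped at 30 um, scaled by (D/D_u)**(1/3)
    for micelle-bound drug (Pohl-type correction)."""
    return min(r, 30e-6) * (D / D_u) ** (1.0 / 3.0)

dissolved, dt = 0.0, 1.0
for _ in range(3600):                 # 1 h of dissolution
    C_u = f_u * dissolved / V         # unbound bulk concentration
    for i, r in enumerate(radii):
        if r <= 0.0:
            continue                  # bin fully dissolved
        k = f_u * D_u / uwl(r, D_u) + (1 - f_u) * D_b / uwl(r, D_b)
        r_new = max(0.0, r - dt * k * (Cs_u - C_u) / rho)
        dissolved += mass[i] * (r ** 3 - r_new ** 3) / r0[i] ** 3
        radii[i] = r_new

frac_dissolved = dissolved / dose     # fraction of dose dissolved
```

In a real P-PSD workflow, the bin radii and mass fractions would be fitted to observed dissolution data rather than assumed.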
The P-PSD concept stems from the fact that the drug substance particle size available for dissolution in the drug product cannot be measured adequately with sizing methods, such as laser diffraction applied to the drug substance (DS PSD). DS PSD is an important quality control of a starting material, but the impact of excipients and manufacturing process conditions on the drug substance area available for dissolution cannot be ignored. Process: It is well-known that compression forces during dry granulation or tablet manufacture will lead to fragmentation of brittle drug substances and excipients. Fragmentation will also affect larger particles at low compression forces and show little effect on smaller particles below a threshold size. The use of a single Diffusion Layer Model (DLM) scale factor applied to the measured DS PSD to predict the effect of processing parameters on the DS surface area available in a final formulation cannot therefore be sustained theoretically. DS Particle Aggregation: Aggregation of primary particles in the DS is another factor that can induce a strong bias to predicting the DS surface available for dissolution. Loose or strong aggregates can form in a drug substance because of material properties, manufacturing process, or storage. Laser diffraction methods would typically size an aggregate of primary particles as one large particle with low surface to volume ratio, leading to an underestimation of the drug surface area available for dissolution, as easily demonstrated by comparing laser diffraction predicted powder surface area to BET specific surface area for various batches of drug substances showing various levels of aggregation. Shape: The shape of particles will also influence the difference between laser diffraction predicted size and surface area measured with an orthogonal technique such as BET specific surface area.
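The laser-diffraction vs BET comparison mentioned above can be sketched as follows (all values invented; the laser-diffraction estimate assumes smooth spheres):

```python
rho = 1300.0                          # true density, kg/m^3

# Volume-weighted PSD from laser diffraction: {radius in m: volume fraction}.
psd = {2e-6: 0.2, 10e-6: 0.5, 40e-6: 0.3}

# Sphere-equivalent specific surface area: sum of w * 3/(rho*r), m^2/kg.
ssa_ld = sum(w * 3.0 / (rho * r) for r, w in psd.items())

ssa_bet = 2.1 * 1000.0                # measured BET SSA, 2.1 m^2/g -> m^2/kg

# A BET area well above the sphere-equivalent LD value is consistent with
# aggregation (LD sizes aggregates as single coarse particles) and/or
# surface roughness; a large ratio flags the DS PSD as a poor predictor
# of the surface area available for dissolution.
ratio = ssa_bet / ssa_ld
```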
Laser diffraction techniques, which project a volume equivalent sphere for each particle, will introduce a bias to the measurements the further away the particle is from a spherical morphology. Wettability: Finally, the DS particle size cannot predict the impact of the drug substance wetting ability on the dissolution rate. Kim et al. have shown that dry coating the surface of drug crystals with a hydrophilic or hydrophobic material can influence aggregation of particles up to a certain surface coverage and also influence drug dissolution through the alteration of the surface energy of the drug, which would change how water can wet the drug surface. The correlation between drug wettability and dissolution has been reported in the literature, and the formulation scientists frequently employ wetting agents as excipients to improve the wettability of drugs in final formulations. The sensitivity of the dissolution rate to drug wettability is especially pronounced for small particles. For example, nanosizing technologies require the presence of surfactants to achieve the desired size and suspension stability, i.e., preventing aggregation and reducing speed of Ostwald ripening. For all of the reasons highlighted above, the size of DS particles measured prior to processing the DS into the final formulation is rarely a good predictor of the drug substance area available for dissolution. There may be rare exceptions to this rule, for example, if the formulation is a suspension or if the formulation is dry but comprises wettable amorphous spray dried drug particles encapsulated with low energy processes. The effect of formulation excipients and processing parameters should be integrated into the mechanistic modeling approaches of drug product dissolution. The P-PSD or Z-factors can serve this purpose. 3.3.2 Discussion The discussion was centered around 5 key questions. 3.3.2.1 Q1: What Is the Appropriate Dissolution Model for an IR Formulation? 
A recent review by Anand et al. showed that direct input, Weibull function, Z-factor, or P-PSD approaches were widely applied methods for integrating dissolution in PBBM. Mechanistic approaches like the Z-factor or the P-PSD were mostly used for low-solubility products, and mechanistic methods were applied in 60% of the 27 case studies. The advantage of mechanistic dissolution models over Weibull functions is that the between- and within-subject variability in in vivo dissolution can be captured in a more relevant way during population modeling. Instead of applying random variation of dissolution (as can be achieved with a Weibull function), mechanistic models will rely on variation in system parameters (e.g., volumes, pH, transit times, composition in bile salts) to recalculate a different in vivo dissolution for the drug product for each simulation. This will yield in vivo dissolution closer to reality than random variation does. Also, the use of mechanistic models is the only option when the model is to be used to predict the impact of prandial state, pH related DDI, or in vivo dissolution across different populations, all situations where the GI physiological changes may profoundly affect the in vivo dissolution rate and make it deviate from the dissolution rate measured in vitro. The criteria to select a dissolution method should therefore be driven by the understanding of the drug product release mechanism and the limitations to in vitro and in vivo dissolution, the impact of manufacturing process and formulation on dissolution, and how well this can be simulated with a given approach. For mechanistic models, it is recommended to generate dissolution data with the same batch in several media/conditions to be able to verify the choice of model and prediction performance in vitro prior to integration of the batch specific data (Z-factor or P-PSD) in the model.
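This recommendation can be sketched as follows (invented data generated from a Takano-type Z-factor model: the Z-factor is fitted in one medium and its predictive performance checked in a second medium via the absolute average fold error, AAFE):

```python
import numpy as np

def profile(z, Cs, times, X0=50.0, V=900.0, dt=0.05):
    """% dissolved vs time for a Takano-type Z-factor model:
       dXd/dt = z * X0**(1/3) * (X0 - Xd)**(2/3) * (Cs - Xd/V)
    X0 in mg, V in mL, Cs in mg/mL, z lumped (illustrative units)."""
    n = int(round(max(times) / dt))
    Xd = np.zeros(n + 1)
    for i in range(n):                # forward Euler integration
        rate = z * X0 ** (1 / 3) * max(X0 - Xd[i], 0.0) ** (2 / 3) * (Cs - Xd[i] / V)
        Xd[i + 1] = min(X0, Xd[i] + dt * rate)
    return np.array([100.0 * Xd[int(round(t / dt))] / X0 for t in times])

times = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 45.0])
obs_buffer = profile(0.4, Cs=0.10, times=times)      # synthetic "observed" data

# Fit a constant Z-factor on the buffer profile by grid search on SSE.
grid = np.linspace(0.05, 1.0, 96)
z_fit = min(grid, key=lambda z: float(np.sum((profile(z, 0.10, times) - obs_buffer) ** 2)))

# Verify prediction performance in a second medium (higher solubility):
obs_biorelevant = profile(0.4, Cs=0.25, times=times) # synthetic "observed" data
pred = profile(z_fit, Cs=0.25, times=times)
aafe = 10 ** np.mean(np.abs(np.log10(pred / obs_biorelevant)))
```

Here the second-medium "observations" are generated from the same model, so the AAFE is close to 1; with real data, a larger AAFE in the second medium would indicate that the chosen dissolution model is missing a relevant mechanism.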
Ideally, to perform the fitting of dissolution data to extract the Z-factor or P-PSD, the method chosen would be discriminative, and the batch dissolution would show an adequate profile with possibly full dissolution in the medium considered. Practically, this would correspond to picking a dissolution method where most measured data comprise between 20% and 80% drug dissolved. Typically, a 1×-dissolution method described by Kuiper, where the drug dose divided by the dissolution volume nears the drug solubility in the dissolution medium, ensures maximal discrimination while allowing full dissolution. Fitting the mechanistic dissolution model to one discriminating method, rather than to all dissolution methods simultaneously, is optimal, as the integration of nondiscriminating methods may lead to bias in the batch specific Z-factor or P-PSD determination. Based on the strengths and limitations of each individual dissolution modeling method presented during breakout session C, a decision tree for dissolution model selection was discussed with the audience. The proposed decision tree provides considerations for developing a dissolution model depending on the disintegration properties of the dosage form, the occurrence of coning or sedimentation during dissolution testing, and the sensitivity of the dissolution rate toward changes in agitation conditions, volume, dose, and pH, as well as the presence of surfactant in the dissolution medium. The proposed decision tree is tailored to oral IR dosage forms and presents a clear description of the modeling assumptions to be considered when selecting a dissolution model. There was general agreement from the attendees that such a decision tree for dissolution model selection provides a valuable tool for both biopharmaceutics modelers in the pharmaceutical industry as well as for regulators when reviewing submitted PBBM cases . 3.3.2.2 Q2: What Are the Input Parameters Required to Mechanistically Evaluate the in Vitro Dissolution Data?
When developing a mechanistic dissolution model, the availability of high-quality input data for model parametrization should be a priority. This includes the availability of a sufficient number of in vitro dissolution profiles collected under relevant experimental conditions depending on the intended purpose of the model. For example, if the PBBM aims at predicting a pH-related DDI, then the dissolution model may need to be developed and validated using in vitro data generated under various pH conditions. Defining the experimental parameters describing the dissolution setup is prudent for each corresponding dissolution data set, and for dissolution media including surfactants, the properties of the micellar system should also be adequately characterized. presents a list of suggested data to collect and could serve as a checklist in the context of the dissolution model development. In addition to the in vitro data that are generated for direct input into the dissolution model, there might be a need to generate supplementary data to support some specific modeling assumptions or to mechanistically explain some anomalies. For example, if the slow dissolution in pure aqueous systems is attributed to poor drug wettability, this hypothesis may be strengthened by the generation of in vitro dissolution data, including a surfactant. Similarly, if in vitro dissolution is slow, presumably due to poor tablet disintegration, the hypothesis may be further supported by the generation of in vitro dissolution profiles of the pure DS or of drug product intermediates (granules or final blend prior to tablet compression). Such mechanistic investigations may not directly feed into the model but provide key information to increase the confidence in the selected model parameters and modeling assumptions. 3.3.2.3 Q3: What Are the Criteria and Acceptable Thresholds for in Vitro Dissolution Model Validation? 
If more than one mechanistic modeling method is applicable, the calculation of model performance indicators such as the average fold error (AFE) and absolute average fold error (AAFE) can provide a rationale for the method choice. Ultimately, the prediction performances of various dissolution modeling methods in the PBBM could also be compared. Examples of dissolution model fitting and impact on PBBM prediction are also shared. The outcome can be found in the Supporting Information . 3.3.2.4 Q4: Which Are the Factors to Be Considered When Modeling Dissolution? Prior to the integration of dissolution data into a PBBM, a critical assessment of the quality and relevance of the experimental dissolution data may be useful. In this context, there are several factors to pay attention to, as summarized below. Agitation: The impact of agitation should be considered when choosing an integration method. All models are derived from the Noyes-Whitney equation (i.e., Johnson, Wang-Flanagan, Takano, Gamsiz, Pepin, or Salehi) and rely on the definition of the UWL thickness around dissolving particles. The UWL thickness is a function of fluid velocity around the dissolving particle in the dissolution medium (in vitro and in vivo). When the fluid velocity tends to zero, the thickness of the UWL tends to the radius of the spherical particle; as an approximation, the UWL thickness is equal to the particle radius up to an upper limit of 30 μm, which is supported by simulations and experiments performed in the literature. Also, this hypothesis fits with the low fluid velocity typically measured in vivo throughout the GI tract, where the average velocity is in the range of 1–2 cm/s, with transient peak velocities of more than 15 cm/s. For particle sizes larger than 30 μm, the UWL thickness typically depends on the agitation as shown for example by Scholz et al.
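The low-velocity UWL approximation described above reduces to a one-line helper; under strong agitation, or for particles well above 30 μm, a hydrodynamic correlation would be substituted:

```python
def uwl_thickness(radius_m):
    """Unstirred water layer thickness (m) around a dissolving particle.

    At low fluid velocity the UWL thickness approaches the particle
    radius; 30 um is used as the upper limit, per the approximation
    discussed in the text. Above ~30 um the thickness becomes
    agitation-dependent and a hydrodynamic model is needed instead.
    """
    return min(radius_m, 30e-6)

# Example: a 5 um particle has h = r; an 80 um particle is capped at 30 um.
h_small = uwl_thickness(5e-6)
h_large = uwl_thickness(80e-6)
```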
When a significant impact of agitation on the dissolution rate is shown, the in vitro dissolution model should accommodate the impact of hydrodynamics. Surface pH and Surface Solubility: When the drug shows acidic or basic moieties, depending on the pH and composition of the aqueous dissolution medium, an acid–base reaction can happen locally at the surface of the dissolving drug particles, without necessarily affecting the bulk pH. This reaction will change the pH within the UWL. The maximal changes are observed at the surface of the drug. This phenomenon was described theoretically and experimentally in the literature for weak acids, bases, and their salts thanks to the work of Higuchi et al., Mooney et al., and Serajuddin et al. Since the drug surface solubility drives the dissolution rate, it is imperative to consider the drug surface solubility to mechanistically model in vitro and in vivo dissolution rates. If there is a rapid phase change, such as salt disproportionation to the free base, then the free base surface solubility at the medium pH should be determined. Surface pH, also known as microenvironmental pH, is driven by the drug substance but can also be largely influenced by excipients added to the formulation, and excipients should be considered when analyzing dissolution data. Formulation composition should always be known so as to evaluate potential interactions between the drug and excipients during dissolution but also in the solid state, as these reactions can also lead to polymorphic transitions. Chemical Degradation: Chemical degradation can happen during dissolution and impact the amount of drug that is dissolved. A typical example is that of rifampicin dissolution in the presence or absence of isoniazid. The presence of bell-shaped dissolution curves or the existence of a dissolution plateau below that of the theoretical batch assay could indicate the potential for in vitro degradation.
The degradation rate should be measured in a separate experiment with solubilized drug by measuring the drug concentration over time in the dissolution medium. If degradation is confirmed, it can be integrated into the model (in vitro and in vivo) to account for a better fit of in vitro dissolution and the amount of drug available for in vivo absorption. Physical Degradation: Bell shapes or plateaus during dissolution may also demonstrate (beyond the lack of enough solubility or medium volume to dissolve the full drug dose) that a polymorphic drug transition happens or that there is a polymorphic impurity in the drug substance. For example, the mixture of different polymorphic forms with different solubility values will lead to a variation in the rate and extent of dissolution. Precipitation from an amorphous to a crystalline form, or from a salt/cocrystal to its free form, will lead to a change in dissolution rate or even to a complete stop of drug dissolution if the precipitation occurs on the surface of the drug product. The presence of cosolvents or polymers can also change the rate and extent of surface precipitation, and, where relevant, such excipients should be considered critical to the product performance. Drug Product Disintegration: The impact of capsule opening or tablet disintegration on the dissolution profile has been widely presented in the literature. Since dissolution models assume that all the drug particles are available at time zero for dissolution, the disintegration time or capsule opening time should be removed from the observed dissolution data prior to fitting the dissolution rate. This can be achieved by subtracting the time needed for drug release from the observed dissolution time. If possible, models for capsule opening and tablet disintegration should be fitted to in vitro data and applied to in vivo data.
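The lag-removal step described above can be sketched as follows (invented data; a capsule opening time of 8 min assumed known from separate disintegration testing):

```python
# Invented dissolution data (min, % dissolved) for a capsule product.
t_obs = [5.0, 10.0, 15.0, 20.0, 30.0, 45.0]
f_obs = [0.0, 6.0, 28.0, 47.0, 71.0, 90.0]

t_open = 8.0   # capsule opening time from separate disintegration testing

# Shift the time axis so particle dissolution "starts" at t = 0, keeping
# only points recorded after the capsule has opened; the particle model
# (Z-factor, P-PSD, ...) is then fitted to the shifted profile.
t_fit = [t - t_open for t in t_obs if t > t_open]
f_fit = [f for t, f in zip(t_obs, f_obs) if t > t_open]
```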
It is also known that in vivo capsule opening or in vivo tablet disintegration takes longer than the time observed during USP disintegration testing and would impact gastric residence in vivo. Method Artificial Effects: In addition to the intrinsic properties of the drug substance and drug product described above, the in vitro dissolution performance may be affected by artificial effects in the in vitro dissolution setup, which may not necessarily have relevance for in vivo dissolution. Such effects include in vitro sedimentation or coning and the interaction with components of the dissolution medium. In vitro sedimentation introduces a bias to the dissolution rate and extent and should be corrected prior to PBBM introduction. The solubility product of ionizable compounds in the presence of specific buffer salts and/or surfactants should be carefully considered (e.g., formation of less soluble lauryl sulfate salts in the presence of SLS or reduced hydration of Eudragit RS in the presence of chloride ions in the dissolution medium). In summary, a robust understanding of the experimental dissolution data is required to ensure the development of a meaningful dissolution model able to capture the in vivo performance in a mechanistic manner. To facilitate this process, the critical aspects to consider are summarized in , which may serve as a checklist in the context of in vitro data evaluation for the dissolution model development. 3.3.2.5 Q5: What Is the Appropriate Quality and Quantity of Data to Be Generated to Allow Dissolution Model Validation? The quality of data is defined by the evaluation of potential factors which may introduce a bias to the dissolution measurement, as shown in the checklist for in vitro data evaluation prior to dissolution model development, leading to the list of necessary input parameters needed for dissolution modeling.
In terms of quantity, there is no definite number at this stage, but it seems that n = 3 different conditions covering the physiological pH range could be sufficient. Care should be taken to obtain adequate release profiles in each dissolution method (see Q1) and to favor dissolution methods where the main component/parameter in the dissolution medium/method influencing drug product dissolution is integrated. For example, for large particles or extended-release matrixes, dissolution data with different agitation rates often provide insight into the release mechanism. For drug substances that are sensitive to pH, covering the physiological pH range is typical. Finally, for drugs that are sensitive to the presence of surfactants in the medium, a comparison of dissolution profiles with synthetic and naturally occurring surfactants is warranted. 3.4 BO Session D - Precipitation: From in Vitro Best Practices to in Vivo Relevance This session began with speaker Christian Wagner (Merck Healthcare KGaA, Darmstadt, Germany) and was led by Poonam Delvadia (FDA) and Mark McAllister (Pfizer), with André Dallmann (Bayer) and Elizabeth Gray (FDA) as scribes. 3.4.1 Presentation: To Precipitate or Not to Precipitate, That Is the Question! Loosely adapted from Shakespeare’s Hamlet, pharmaceutical scientists have been asking this question for decades, because drug precipitation in the small intestine can affect the rate and/or extent of oral drug absorption. This, in turn, can contribute to PK variability and can jeopardize the efficacy of an orally administered drug. Thus, there is a huge need for predictive tools to assess the impact of potential drug precipitation on the absorption of orally administered drugs. Drug precipitation typically occurs from a supersaturated state, i.e., when the concentration of dissolved drug exceeds its thermodynamic solubility.
Weakly basic drugs are especially susceptible to drug precipitation because their solubility is markedly higher in the (fasted) stomach than in the small intestine. Upon gastric emptying of dissolved drug into the small intestine, the drug’s solubility drops, and molecule clusters form, grow, and precipitate once a critical cluster size is reached (nucleation and growth theory). Besides weakly basic drugs, supersaturating formulations such as ASDs and self-(micro)emulsifying drug delivery systems (S(M)EDDSs) can also be subject to intestinal drug precipitation. Whether or not a drug precipitates thus depends on several drug, formulation, and physiological factors. In any case, the driver of drug precipitation is the reduction of free energy in the system. Its complex nature underlines the need for tools that reliably predict luminal drug precipitation, allowing for the translation of results from the lab (in vitro) into the clinics (in vivo) via PBBM tools (in silico). During recent years, various in vitro precipitation assays have been developed. These assays can be applied throughout the development cycle of a drug, i.e., from early research through life-cycle management. The commonality of most of the in vitro assays is that they strive to simulate physiological conditions by transferring a drug solution or suspension from an artificial stomach (donor) into an artificial small intestine (acceptor) compartment. The concentration of dissolved drug can be measured by various techniques, such as liquid chromatography or in-line UV–vis. On the one hand, small-scale assays are typically used to investigate the precipitation behavior of the drug in a typical preformulation setting, i.e., using small quantities of the drug substance. On the other hand, large-scale models typically use physiologically relevant gastric and intestinal fluid volumes, which allows for performance-testing of formulations.
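A bare-bones sketch of such a transfer setup (all parameters invented; constant acceptor volume; first-order emptying; first-order precipitation above solubility) is:

```python
# Two-compartment transfer sketch (all parameters invented): dissolved
# drug empties first order from the donor ("stomach") into the acceptor
# ("intestine"), where supersaturation triggers first-order precipitation
# toward the acceptor solubility.
dose     = 100.0   # mg, fully dissolved in the donor at t = 0
V_accept = 350.0   # mL, acceptor volume (assumed constant)
kt       = 0.1     # 1/min, first-order transfer (gastric emptying) rate
Cs       = 0.05    # mg/mL, solubility in the acceptor medium
kp       = 0.05    # 1/min, first-order precipitation rate constant

dt, t_end = 0.1, 120.0
A_donor, A_diss, A_prec = dose, 0.0, 0.0
conc_profile = []                      # (time in min, acceptor conc in mg/mL)
for step in range(int(t_end / dt)):
    transfer = kt * A_donor * dt       # amount emptied this step
    A_donor -= transfer
    A_diss  += transfer
    C = A_diss / V_accept
    if C > Cs:                         # supersaturated: precipitate
        prec = kp * (C - Cs) * V_accept * dt
        A_diss -= prec
        A_prec += prec
    conc_profile.append((step * dt, A_diss / V_accept))

C_max = max(c for _, c in conc_profile)
```

The simulated acceptor concentration overshoots the solubility and then relaxes toward it, the qualitative signature such assays look for; a real setup would also dilute the acceptor and characterize the solid state of the precipitate.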
More advanced models, which aim at simulating the interplay between drug precipitation and absorption, have also been published. Of note, a drug can precipitate as crystalline or amorphous form(s), which, in turn, can impact the rate and extent of redissolution of the precipitate. Likewise, the particle size of the precipitate can also impact its redissolution kinetics. A well-known example of amorphous precipitation is gefitinib, which was shown to precipitate in an amorphous state and then slowly recrystallize. Whenever possible, characterizing the solid state of the precipitated drug, testing for redissolution, and adapting the PBBM accordingly would be a viable approach. Despite significant advances during the past 20 years, all in vitro systems to predict drug precipitation remain highly artificial, as they are not capable of reflecting the complex nature of human anatomy and physiology in its totality. The comparably high number of in vitro precipitation assays described in the literature indicates a lack of harmonization/standardization, especially since the selection of a suitable in vitro precipitation model seems to be a case-by-case decision, depending on the drug and formulation properties. A “universal” in vitro model capable of simulating luminal drug precipitation for a wide variety of compounds and at various conditions (dose, prandial, or disease state, formulation) would increase confidence in in vitro-based precipitation predictions. In addition to in vitro precipitation assays, luminal sampling from volunteers or clinical PK data can also be used to deduce whether a drug may be prone to precipitation. For example, if PK data from a well-designed single ascending dose study indicate linearity in relevant PK parameters such as AUC, C max, and elimination (no flip-flop kinetics), the impact of precipitation on drug absorption becomes unlikely.
In contrast, nonlinear AUC or C max, or a pronounced shift in t max, may indicate nonlinear absorption, potentially deriving from solubility/dissolution limitations and/or drug precipitation. Time-dependent effects, nonlinear clearance mechanisms, disease state (healthy volunteer vs patients), changes in dose and/or formulation, and other confounding factors should be taken into consideration when deducing precipitation characteristics from clinical PK data. In contrast to in vitro data, in vivo data typically do not provide mechanistic insight into the precipitation process. Therefore, parameter identification remains a potential issue when precipitation characteristics are deduced from clinical data. To translate insights from drug precipitation into a meaningful prediction and potentially extrapolate to untested scenarios, the results from an in vitro precipitation study (including solid state and redissolution characterization of the precipitate) or a clinical PK trial (including luminal aspiration studies) can be used to inform a PBBM. This translational, integrative approach permits the prediction of luminal drug precipitation at various doses and prandial states and for different formulations. Commercially available PBBM tools typically offer two possibilities of applying precipitation kinetics to the simulations, i.e., by applying a simplistic precipitation rate constant or time, combined with supersaturation, or by applying a mechanistic nucleation and growth model. The latter approach allows for the mechanistic simulation of drug precipitation by fitting nucleation and growth parameters to in vitro or in vivo data. From a scientific perspective, in vitro precipitation setups should be suited to extract nucleation and growth parameters for use as input for a PBBM.
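For the simpler of the two options, a first-order precipitation rate constant can be extracted from the concentration decay observed in vitro after the peak (invented data; solubility assumed known):

```python
import numpy as np

# Invented decay-phase data from a transfer experiment, after C_max:
#   C(t) = Cs + (C0 - Cs) * exp(-kp * t)   (first-order precipitation)
t  = np.array([0.0, 10.0, 20.0, 40.0, 60.0])     # min after C_max
C  = np.array([0.25, 0.184, 0.14, 0.09, 0.068])  # mg/mL, "measured"
Cs = 0.05                                        # mg/mL, solubility

# Linearize and regress: ln((C - Cs)/(C0 - Cs)) = -kp * t
y = np.log((C - Cs) / (C[0] - Cs))
kp = -np.polyfit(t, y, 1)[0]        # precipitation rate constant, 1/min
t_half = np.log(2.0) / kp           # precipitation half-life, min
```

The fitted rate constant (or the corresponding precipitation time) would then be supplied, together with the supersaturation ratio, as the PBBM precipitation input.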
However, the low number of publications describing the application of software built-in mechanistic precipitation tools indicates that the advantage of applying these tools as part of a commercially available PBBM suite still needs to be demonstrated. “To precipitate, or not to precipitate” – this question remains at least partly unanswered. As has been discussed in the scientific community previously, the results of this workshop also revealed that our currently available in vitro tools often lack “universal” predictive power: no in vitro tool currently available is capable of predicting drug precipitation (or the lack thereof) for a wide variety of drugs and formulations. Likewise, there are still significant knowledge gaps, for example, with respect to our understanding of the impact of GI hydrodynamics and transit rates (including the “Magenstraße”), distribution of fluid pockets, impact of intestinal mucus, and transporter effects on luminal drug precipitation. Understanding these properties would aid in developing improved in vitro precipitation setups and more predictive PBBM tools. PBBM tools should benefit from ongoing advances in scientific research and constantly be updated with state-of-the-art knowledge. Despite significant improvements during the past decades in terms of in vitro methodology to test for drug precipitation, computational and software capabilities to model it, and knowledge about the anatomy and physiology of the human GI tract (which, besides the drug properties themselves, affect the rate and extent of drug precipitation), predicting drug precipitation is still associated with a high degree of uncertainty, especially for drugs with impaired absorption. For this purpose, a decision tree on how to test for drug precipitation and apply it to a PBBM was presented during the workshop.
The decision tree is adapted based on recommendations from a previous publication and reflects the general workflow applied to precipitation predictions in PBBMs in one of the IQ working group’s member companies (Merck Healthcare KGaA, Darmstadt, Germany). As clinical PK data are thought to provide the highest evidence on impaired drug absorption, evoked by, e.g., drug precipitation, the starting point of the decision tree is the question of the availability of clinical PK data. The left side of the decision tree (“no clinical data available”) describes bottom-up in vitro methods to deduce precipitation parameters for the PBBM input. Given the lack of a “universal” precipitation assay, the decision tree does not recommend using a particular in vitro assay to predict drug precipitation. Instead, it leaves the choice of a suitable assay to the discretion of the biopharmaceutical scientist. One key element of the decision tree is the recommendation to apply precipitation scenarios to the PBBM. For example, the modeler could apply a “no versus moderate precipitation scenario” (in vitro setup indicates no or very modest precipitation) or a “moderate versus high precipitation scenario” (in vitro setup indicates precipitation). This approach mitigates the uncertainties associated with many in vitro precipitation assays, particularly their tendency to overpredict drug precipitation. The right side (“clinical data available”) describes a top-down method for deducing precipitation kinetics, i.e., the analysis of clinical PK data. The key to reliably deducing precipitation parameters is the availability of high-quality PK data, e.g., from a dose escalation study, which would ideally be conducted in healthy volunteers. Other confounding factors, such as nonlinear clearance mechanisms or time-dependent effects, should be excluded.
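The branching logic of the decision tree can be rendered schematically in a few lines of code (a paraphrased sketch of the workflow described above, not an official SOP; the step labels are my own wording):

```python
def precipitation_workflow(clinical_pk_available: bool,
                           in_vitro_indicates_precipitation: bool = False,
                           pk_is_dose_linear: bool = True) -> list[str]:
    """Schematic sketch of the precipitation decision tree.
    Returns the recommended PBBM modeling steps as plain-text labels."""
    steps = []
    if clinical_pk_available:
        # right side: top-down analysis of clinical PK data
        if pk_is_dose_linear:
            steps.append("assume negligible precipitation; verify the PBBM against observed PK")
        else:
            steps.append("fit precipitation kinetics to clinical PK (top-down)")
            steps.append("check for confounders (nonlinear clearance, time-dependent effects)")
    else:
        # left side: bottom-up, in vitro assay chosen at the scientist's discretion
        steps.append("run in vitro precipitation assay incl. solid-state and redissolution characterization")
        if in_vitro_indicates_precipitation:
            steps.append("apply moderate vs high precipitation scenarios to the PBBM")
        else:
            steps.append("apply no vs moderate precipitation scenarios to the PBBM")
    return steps
```

The scenario-bracketing branch mirrors the recommendation in the text: rather than trusting a single in vitro readout, the modeler brackets the PBBM with two precipitation scenarios.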
One drawback of the top-down approach is the lack of parameter identification (e.g., the individual impact of drug dissolution, precipitation, and redissolution on the PK profile); i.e., this approach is a nonmechanistic one. The decision tree presented herein considers the above-mentioned uncertainties around the in vitro and in silico prediction of drug precipitation. It can be flexibly adapted based on specific needs and can be refined continuously based on future scientific advancements. Therefore, the decision tree should be understood as a practical tool rather than a strict “operating procedure”.

3.4.2 Discussion

After the presentation, the audience was guided by Mark and Poonam to discuss the five highlighted questions below.

3.4.2.1 Q1: Which Limitations of Commonly Used in Vitro Precipitation Assays Based on Transfer Methodology Can Be Addressed by an Improved Experimental Design?

The design of in vitro precipitation assays should be based on the intended application and what data are required; for example, is the assay being used to perform formulation ranking or for informing PBBM input? There was a debate around the criticality of integrating a permeability-like component within the in vitro precipitation assay, particularly for compounds with high permeability. As a general concept, it was suggested that the thoughtful inclusion of a well-designed permeability component (absorption compartment) in the in vitro dissolution assay would be expected to help with generating more accurate quantitative predictions and rank orders for formulations. However, it was also recognized that the practical limitations for modifying in vitro assays to accurately simulate in vivo permeability were significant. Biphasic dissolution assays that are designed in a two-stage manner (e.g., addition of the lipid phase and pH shift after 30 min to reflect the transfer from the stomach to the upper intestine) were also considered by some participants as an improved method.
It is also important for modeling to understand the solid state of the precipitate. The particle size (distribution) of the precipitate(s) should ideally be measured in vitro, along with pH values (and any changes therein), so that these inputs can be included in a PBBM. It was suggested that precipitated material be isolated and its dissolution measured to accurately characterize the redissolution performance. It was also suggested that biphasic and/or transfer computational models can be a good approach when attempting to correlate in vitro and in vivo supersaturation concentrations. Another member of the audience from industry stated that different methodologies are used based on whether they are looking at the drug product or the drug substance. The totality of data obtained from different in vitro experiments should then be considered. Though it is always difficult to incorporate a permeability component into in vitro systems, a complex model with an absorptive component has been helpful. The audience seemed to agree that how a drug is presented to an absorptive surface area in vitro is very important, because in vitro modeling can overestimate the concentrations at which precipitation occurs. For many compounds in developmental stages, early precipitation data may have raised a red flag, yet those early precipitation risks usually turn out to be less limiting than predicted by in vitro data; should permeability therefore be considered a safeguard for some drugs that precipitate? This again stresses the importance of including an absorption compartment in the in vitro dissolution assay. Ultimately, while there are many different transfer models used to measure the rate of precipitation, there is no one-size-fits-all approach, as the complexity of the assay required depends on the question being asked (e.g., drug precipitation propensity, impact of formulation, etc.).
3.4.2.2 Q2: Can We Identify the Class of Compounds for Which the Need to Integrate a Permeation-Like Process in the Precipitation Assay Is Essential for Accurate Estimation of Precipitation, and What Are the Recommended Experimental Options for This?

It was suggested to build a data set of molecules across the range of physicochemical space to define supersaturation and precipitation performance that could be used in verifying models. It was noted that a number of compounds had been studied during the IMI OrBiTo project and that a recent review summarizing the available human intubation data for a large number of molecules could be a useful starting point for such a database.

3.4.2.3 Q3: What Are the Options/Best Practices for Characterizing (Or Predicting) Precipitated Material Attributes (Form, Particle Size, and Solubility) for Accurate Input to PBBM?

Initially, an attendee stated that, prior to looking into the software capabilities, samples should be collected so that the solid state of the precipitate and its particle size can be determined and measured. Though many agreed, the responses from industry indicated that this is not a common practice. Some industry representatives reported that precipitated material attributes are nowadays increasingly characterized, but concerns were raised about whether enough precipitated material could be obtained for analysis. Drugs may precipitate as amorphous forms, which are known to exhibit higher solubility, or as crystalline forms, which exhibit lower solubility. The example of gefitinib, which precipitates in an amorphous form that converts to a crystalline form, was discussed and underscores the importance of understanding the solid-state characteristics for modeling. Nevertheless, the question remains: what is the best approach (mechanistic or descriptive), given that there is no standard practice?
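As a concrete illustration of how particle size and redissolution are linked, a simple diffusion-layer model (assuming monodisperse spheres dissolving under sink conditions with a diffusion-layer thickness equal to the particle radius; all numerical values below are illustrative, not measured data) lets one back-calculate an effective initial particle radius from a measured complete-dissolution time:

```python
import math

def radius_from_dissolution_time(t_diss_s, density, diffusivity, solubility):
    """Back-calculate an effective initial particle radius from a measured
    complete-dissolution time, using the diffusion-layer result for small
    monodisperse spheres under sink conditions:
        t_diss = rho * r**2 / (2 * D * Cs)   =>   r = sqrt(2*D*Cs*t_diss/rho)
    SI units throughout (s, kg/m^3, m^2/s, kg/m^3); returns radius in meters."""
    return math.sqrt(2.0 * diffusivity * solubility * t_diss_s / density)

# Illustrative values: rho = 1300 kg/m3, D = 6e-10 m2/s, Cs = 0.05 kg/m3 (50 ug/mL),
# precipitate redissolves completely in ~10 min
r = radius_from_dissolution_time(t_diss_s=600, density=1300,
                                 diffusivity=6e-10, solubility=0.05)
```

With these placeholder inputs the effective radius comes out in the low-micrometer range; a polydisperse or non-spherical precipitate would of course require a more elaborate treatment.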
Further discussion centered around redissolution, which can be used to back-calculate particle size. It was stated that this approach is easier than measuring the particle size. A series of experiments conducted with posaconazole was also discussed, as in vitro experiments using the transfer assay showed an aggregate structure that was neither crystalline nor amorphous. More specifically, the obtained phase-separated species appeared to be metastable, reaching a plateau above the thermodynamic solubility but below the supersaturated state. The attributes of this phase-separated species could not be further elucidated. This observation challenges the current practice of in vitro to in vivo translation; can we assume from these studies that what happens in vitro translates to in vivo? As in vivo particles do not grow in an isolated medium, they might have attributes different from those of precipitates isolated from in vitro experiments. There was also some discussion about overpredicting precipitation, as ketoconazole precipitates strongly in vitro, but in vivo, it was determined that only about 10% of the dose precipitated. It was again stressed that a curated set of case examples with well understood in vivo behavior would be helpful to define parameters that need to be better characterized in vitro.

3.4.2.4 Q4: What Are the Best Practices for Modeling Precipitation under Physiologically Relevant Luminal Conditions–First Order Fixed Rate Constant/Mechanistic Nucleation and Growth Predictions in Dynamic pH/Fluid Volumes?

The first approach brought up was a bottom-up approach, in which the kinetics observed in the in vitro experiment are modeled. Subsequently, the dissolution–precipitation model is integrated into a PBBM framework via IVIVE to simulate the behavior in vivo. This approach was preferred over a top-down approach, where precipitation kinetics are fitted to observed PK data.
From a physical and mechanistic modeling perspective, it was considered valuable to separate the processes involved in dissolution and precipitation from each other, measure them individually, and then combine all of the individual mechanisms in a model to obtain an improved outcome. A question arose regarding whether anyone had used the emptying half-life in modeling and then investigated the variability. Similarly, it was emphasized that physiological variability needs to be accounted for in the PBBM, in addition to the variability associated with the pharmaceutical performance of the delivery system. Given the extreme interindividual variability in parameters related to precipitation, population simulations will likely cover the whole range of precipitation constants. Norvir (ritonavir formulated as an ASD tablet) was given as an example where interindividual variability should be considered. Additionally, in the case of a precipitation risk, consideration should be given to mitigating this risk through the use of precipitation inhibitors or by using a salt of the drug. The latter option might be an alternative to more complex bioenhancement systems like ASD formulations. One answer referenced tacrolimus (an ASD), for which the precipitation risk was mitigated through formulation; however, it should always come down to an understanding of the biopharmaceutics risk.

3.4.2.5 Q5: How Can Precipitation from Supersaturating Delivery Systems, Such as ASDs, Be Modeled? What Options Are Available to Account for Complex Speciation, Including Liquid–liquid Phase-Separated Nanodroplets?

This is particularly challenging and requires further work due to the complexities that arise from the presence of, for example, polymers and surfactants, which make prediction difficult. Mass transfer models should account for the mixed speciation of the drug. The consensus in the room was that modeling needs to be guided by the accurate in vitro performance of a supersaturating system.
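One common way to propagate the interindividual variability discussed above into a population PBBM simulation is to sample each virtual subject's precipitation rate constant from a lognormal distribution. A minimal sketch (the mean and coefficient of variation are invented for illustration, not population estimates):

```python
import math
import random

def sample_precipitation_constants(k_mean=1.0, cv=0.8, n=1000, seed=7):
    """Sample individual first-order precipitation rate constants (1/h)
    from a lognormal distribution for a virtual population.

    k_mean  desired arithmetic mean of k across the population
    cv      coefficient of variation of the lognormal distribution
    The (mu, sigma) parametrization is chosen so that E[k] = k_mean."""
    sigma2 = math.log(1.0 + cv ** 2)
    mu = math.log(k_mean) - sigma2 / 2.0
    rng = random.Random(seed)  # fixed seed for a reproducible virtual population
    return [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n)]

ks = sample_precipitation_constants()
```

Running the PBBM once per sampled constant then yields a distribution of exposure metrics rather than a single deterministic profile, which is what "covering the whole range of precipitation constants" amounts to in practice.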
3.5 BO Session E - Permeability: From in Vitro Best Practices to in Vivo Relevance

This session began with speaker Hans Lennernäs (Uppsala University) and was led by Christer Tannergren (AstraZeneca) and Rodrigo Cristofoletti (University of Florida), with Xiaojun Ren (Novartis) and Eleftheria Tsakalozou (FDA) as scribes.

3.5.1 Presentation

3.5.1.1 Introduction

By understanding the permeability of a drug candidate in the GI tract, medicinal chemists and biopharmaceutical scientists are expected to be able to design efficacious and safe drug compounds. These new drug compounds, together with improved knowledge of regional intestinal permeability, will also allow them to optimize and develop pharmaceutical formulations with high oral bioavailability, less intra- and interindividual variability, and better control of the plasma concentration–time–effect relationship. The investigation and optimization of intestinal permeability are among the key factors, together with potency, efficacy, and drug–drug interactions, that are crucial in the drug discovery and development processes of oral pharmaceutical products. Permeability plays a key role in determining the rate and extent of intestinal absorption of a drug. If a drug has poor permeability (BCS class III or IV), it may not be effectively transported into the bloodstream and could have a limited and highly variable therapeutic response. On the other hand, if a drug has high permeability and poor pH-dependent solubility (BCS class II), the low and erratic rate and extent of absorption may be overcome with a sophisticated and innovative formulation design, such as an ASD. This allows for the development of oral products with less variable plasma PK and more effective doses, which can improve patient compliance and overall treatment outcomes. Determining the intestinal permeability of drug candidates has significantly contributed to reducing the attrition rates of drugs in development.
Previously, about 40% of drug candidates were discarded due to poor ADME (absorption, distribution, metabolism, and excretion) properties. However, by focusing on understanding and optimizing permeability, this attrition rate was reduced to around 10%. The limited permeability observed 2–3 decades ago can be attributed to the fact that, during that time, a significant number of drug candidates targeted extracellular sites, and membrane permeation was not considered a crucial aspect of pharmacological discovery efforts. Recent advancements in drug discovery and medicinal and biological chemistry have expanded the possibilities for developing oral drugs that were previously considered to have unfavorable physicochemical properties. These new modalities, with physicochemical properties beyond the rule of five, have opened up a broader range of options for formulating drugs that can be effectively absorbed across the intestinal barriers. In addition, considering the permeability along the human GI tract is an essential step in the innovation and development of oral pharmaceutical products featuring new modalities and challenging physicochemical properties.

3.5.1.2 Intestinal Permeability Models and Approaches

Overall, the intestinal barrier is a complex system that plays a crucial role in maintaining a delicate balance between absorption and protection. It acts as a physical and immunological barrier to prevent the invasion of pathogens and the absorption of toxic substances. The small intestine, with its unique architecture and cell composition, is the major site of nutrient and drug absorption in the body. The intestinal mucosa is a dynamic physiological barrier that receives and reacts to neuroendocrine signals to maintain a harmonious interplay between absorptive permeability, protective barrier functions, and secretory functions.
Regional differences along the GI tract, such as between the small and large intestine, can have significant implications for pharmaceutical development. It is important to consider these biopharmaceutical and physiological factors in the design of drugs to ensure their optimal delivery, absorption, and effectiveness. The intestinal epithelium, the fastest renewing tissue in humans, is made up of multiple cell types within a microenvironment consisting of a dynamic, multiparametric, three-dimensional (3D) architecture, making it particularly challenging to recreate in vitro. The intestinal tissue is organized in finger-like protrusions called villi and invaginations called crypts. Intestinal organoids, also known as enteroids, colonoids, or “mini-guts”, are three-dimensional structures derived from stem cells that recapitulate the architecture and function of the intestine. Furthermore, recent combined advances in cellular biology and microfabrication technologies have led to the development of various bioengineered systems to model and provide more in vivo relevant investigations of intestinal mucosal physiology and pathophysiology. These microfabricated in vitro models may constitute an alternative to current approaches for screening and biopharmaceutics evaluation, as well as provide insights into fundamental mechanisms governing intestinal homeostasis and pathologies. It is important to evaluate drug substance solubility, as drugs must be dissolved prior to transport across the intestinal barriers. The mass transfer ( J ) of dissolved drug molecules across semipermeable intestinal barriers is strongly affected by the nature and functions of the intestinal mucosal barrier, especially the epithelial barrier. Different transport mechanisms can be involved in the process, and more than one mechanism may be employed for a single drug molecule.
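In its simplest form, this mass transfer is often written as the product of the effective permeability, the available surface area, and the free drug concentration adjacent to the membrane (standard biopharmaceutics notation, added here for clarity; it is not reproduced verbatim from the workshop):

```latex
J = P_{\mathrm{eff}} \cdot A \cdot C_{\mathrm{lumen}}
```

where J is the mass transfer rate (amount per unit time), P eff the effective intestinal permeability, A the absorptive surface area, and C lumen the free drug concentration at the membrane surface — which is why accurately quantifying the concentration adjacent to the intestinal membrane is critical.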
The net permeation process for a drug occurs via passive transcellular (lipoidal) and paracellular diffusion and/or carrier-mediated transport in both the absorptive and secretory (efflux) directions to various extents. To accurately determine the permeability of a drug, it is necessary to quantify the concentration of the drug adjacent to the intestinal membrane. This depends on the local distribution model applied in the various permeability models. A variety of in silico, in vitro, and in vivo permeability models are used in biopharmaceutical studies during all parts of the drug discovery/development process to predict and characterize human drug absorption. The selected intestinal permeability model will need to reflect the intended use of the permeability estimate at different stages of the drug development process. Permeability models comprise simple simulations and in vitro systems with high-throughput capacity, which are typically used in early drug development to sort compounds. More complex models involving animals, humans, and/or PBBM are employed in the later stages of nonclinical or early clinical drug development. This is particularly crucial when more in vivo relevant predictions are essential for successful translational science and product development. For instance, it is obvious that regional permeability data play a pivotal role in shaping decisions regarding the choice and design of modified release dosage forms. Human fraction dose absorbed (fa) and measured jejunal permeability can be thought of as potential prediction gold standards. Intestinal catheters have been used for decades in physiology, nutrition, microbiology, PK, and biopharmaceutics research. Studies involving catheters of different lengths and sizes have significantly increased the knowledge regarding the function and regulation of various processes of the human GI tract.
The gold-standard permeability values are those that are determined with GI devices after local single dose administration or perfusion of a certain intestinal segment. A review has compiled historical human intestinal P eff values of 80 substances from 61 clinical trials performed in all parts of the human intestinal tract. The investigated substances include drugs, monosaccharides, amino acids, dipeptides, vitamins, steroids, bile acids, ions, fatty acids, and water. It is well-known that intestinal catheters that are intended to be placed in the more distal small intestine or even the proximal colon are challenging to biopharmaceutical researchers and clinicians. Single-pass perfusion of a certain region of rat intestine (in situ) is the best characterized and most thoroughly validated animal model for investigations of small and large intestinal permeability. A high correlation between human and rat small intestine ( R 2 = 0.8–0.95) was observed for drug intestinal permeability with both carrier-mediated absorption and passive diffusion mechanisms. A moderate correlation between the two species was also found for the expression levels of transporters in the duodenum, which provides evidence of a similarity in the molecular mechanisms of drug absorption. Transport properties (permeability) for different compounds were also highly correlated between rat and human when using rat intestinal specimens in the Ussing chamber model. In contrast, no correlation between rat and human intestine was found for the expression of metabolizing enzymes, which may adequately account for the well-established difference in drug metabolism and oral bioavailability between the two species.

3.5.1.3 Immediate and Modified Release in the Design of the Oral Dosage Form

Design and development of the most appropriate oral dosage form depend on the biopharmaceutical properties, terminal half-life (i.e., dosing rate), and plasma exposure–effect relationship of the drug.
The fraction dose absorbed (fa) needs to be synchronized with intestinal permeability, dissolution rate, and regional intestinal transit for the final design of the dosage form. The small intestine is the major site of nutrient and drug absorption in the body, which is established with a characteristic 3D architecture and cell composition. It is recognized that regional differences exist along the GI tract regarding barrier functions, neuroendocrine processes, and immunological effects, which have a major impact on pharmaceutical development. Interestingly, a larger surface area of the intestinal lining is at a higher risk of being highly exposed to digestive enzymes, potentially toxic xenobiotics, and luminal microbiota. Thus, it might be that mammals try to find an optimal balance between protection and service by having a small surface area that prevents extensive uptake and epithelial exposure to luminal content while simultaneously providing a large enough mucosal surface for optimal digestion and nutrient absorption. Quantitative geometrical data on the human GI system vary considerably, especially the surface area enlargement of the intestine due to folds, villi, and microvilli. The inner surface of the small intestine is greatly enlarged by folds, villi, and microvilli, whereas the large intestine mucosa does not have folds comparable to those of the plicae circulares, except in the rectum. It has been claimed that the total surface area of the intestinal mucosa is about the size of a tennis court (260–300 m 2 ), with a reported value of 0.3 m 2 for the large intestine. It has also been claimed that the major part of orally administered drugs is absorbed in the jejunum/ileum, as these account for 99% of the total absorption surface.
However, according to Fändriks and Helander in 2014, the small intestine represents about 92–93% of the total intestinal surface area, which leaves some surface area in the large intestine for drug absorption from oral modified release formulations.

3.5.1.4 Intestinal Transport Across the Intestinal Barrier

The permeation of a dissolved drug molecule across semipermeable biological barriers is dependent on the molecular properties of the drug, the transport mechanism(s), the drug concentration, and the nature and conditions of the barrier. The transport mechanisms for a drug molecule may include passive lipoidal and paracellular diffusion and/or carrier-mediated (CM) transport in both the absorptive and secretory directions. Recently, the CM transport route has been proposed to be the universal transport mechanism, with no impact from passive lipoidal diffusion. However, Hans Lennernäs indicated that the experimental evidence for this transporter-only theory is weak, and the opposing view that CM and passive transport processes coexist is more probable. CM transporters are primarily important for the absorptive transport of water-soluble nutrients, such as glucose, vitamins, and amino acids, where they enable uptake from, for instance, the intestinal lumen into the bloodstream. This transport mechanism might also be important for some drug compounds, such as levodopa and valacyclovir, but is in general considered relatively rare. An investigational drug having a (net) in vitro efflux ratio (ER) higher than 2 is classified as an efflux transporter substrate, when any pH difference is considered in the applied in vitro model (e.g., Caco-2 cells or transfected cells overexpressing P-gp). Rhodamine 123, digoxin, vinblastine, paclitaxel, and quinidine are often used as probe substrates for demonstrating the presence of the P-gp transporter. The ERs for vinblastine, digoxin, cimetidine, and quinidine were 4.25, 5.41, 1.79, and 5.85, respectively.
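The ER classification used above follows directly from bidirectional Transwell permeability data; a minimal sketch (the P app values below are invented for illustration, not measured data):

```python
def efflux_ratio(papp_b2a: float, papp_a2b: float) -> float:
    """Efflux ratio from bidirectional permeability data, e.g., in Caco-2:
    ER = P_app(basolateral-to-apical) / P_app(apical-to-basolateral)."""
    return papp_b2a / papp_a2b

def is_efflux_substrate(er: float, cutoff: float = 2.0) -> bool:
    """In vitro classification described in the text: an ER above 2
    flags the compound as an efflux transporter substrate."""
    return er > cutoff

# Invented bidirectional P_app values (cm/s), purely for illustration
er = efflux_ratio(papp_b2a=21.6e-6, papp_a2b=4.0e-6)
```

As the following paragraphs make clear, an in vitro ER above the cutoff is only a trigger for further in vivo investigation, not proof of an in vivo efflux limitation.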
Despite being classified as efflux transporter substrates, their fraction dose absorbed is 65% for cimetidine and >80% for the other three drugs. This again demonstrates that drugs with an identified ER higher than 2 need to be investigated in vivo, since it has often been shown that there is no or only a limited in vivo P-gp efflux effect on the extent of absorption. Paclitaxel has been reported to be a P-gp substrate in recent in vitro (Caco-2 model) and in vivo rat PK studies using encequidar, a specific inhibitor of P-gp and breast cancer resistance protein (BCRP). Altogether, these studies support that P-gp might have a quantitative effect when the efflux ratio is extensive. However, the role of an efflux substrate remains unclear in many cases. For instance, a selective estrogen receptor degrader-antagonist was reported to have a high efflux ratio (ER > 30), which was saturable and decreased significantly at concentrations at and above 30 μM (i.e., ER was <15 at concentrations ≥30 μM). The solubility was high in aqueous media (>900 μM), and the candidate had a high fraction absorbed in all species examined (fa ≥ 50–100%). Despite being a drug candidate with a high ER, it had favorable physicochemical properties that resulted in good oral bioavailability in several preclinical species and potent in vivo activity in a mouse xenograft model. The regional differences between the colon and the small intestine regarding the expression of efflux transporters and the tight junctions may potentially also affect the rate and extent of colon absorption, as well as the prediction performance in this investigation. However, it has previously been concluded that there is no indication that efflux-mediated transport limits colon absorption, which suggests that it is likely the intrinsic passive permeability that is the major determinant of membrane transport in the colon.
This is further supported by recently established correlations between in vitro permeability and human colon absorption, where the in vitro assays mainly measure passive drug transport. Furthermore, as the main source for the estimated permeability in this investigation was the Caco-2 model, which is of colonic origin, it is likely that the well-known effect of narrower tight junctions in the colon was appropriately accounted for in the predictions.

3.5.1.5 Conclusions

Regional human intestinal permeability was identified as one important factor for future intestinal permeability determinations in both in vitro and in vivo models. Human regional intestinal permeability is especially important for the validation of existing and improved bioengineered in vitro intestinal transport models. Determinations of in vivo colon permeability are of special urgency but are very difficult in humans. Novel GI capsule systems, GI devices with external control, and capsules connected to long GI-tube methodologies are useful in those projects. In vitro intestinal P app values in the Ussing and 2D cell monolayer models need scaling and adjustment prior to use in PBBM. The choice of permeability model is important for the assessment of the effect of pharmaceutical excipients. Caco-2 cell monolayers have been shown to often overpredict the potential in vivo effects of pharmaceutical excipients; this higher sensitivity is explained by the multiple differences between the simple Caco-2 monolayer and the human intestine in vivo, with its additional features such as the mucus layer and full neuroendocrine feedback systems. Future intestinal organoids and 3D bioengineered intestinal models might exhibit morphological and physiological features that resemble those of native intestinal mucosa.
These more complex in vitro systems are promising but require extensive evaluation and validation prior to use in rational drug discovery and development and for regulatory decision-making. Encequidar and elacridar may be very useful tools to assess the effect of intestinal efflux mediated by P-gp and/or BCRP on the rate and extent of intestinal absorption. Biopharmaceutics has an exciting future with the development of novel GI devices for assessment in humans and animals, bioengineered in vitro systems mimicking the in vivo situation, advanced modeling with molecular dynamics simulation and artificial neural networks (ANN) in drug discovery, and extended use of more accurate PBBMs in all parts of drug development. Model and knowledge development to predict the effective permeability of new, interesting, and challenging drug candidates beyond Lipinski’s rule of 5, with a molar mass above 700 and Log D > 5, will be an important part of any future successful drug development. These novel ANN simulation tools for oral drugs may also be applied before synthesis and even potentially allow for optimization of relevant physicochemical properties of new molecules of interest.

3.5.2 Discussion

The main objective of this part of the session was to discuss best practices for the integration of permeability in PBBM.

3.5.2.1 Q1: What Are the Available Methods to Estimate Jejunal P eff and What Is the Rank Order between the Methods with Regard to Confidence in the P eff Estimation?

The majority of the attendees stated that they use MDCK or Caco-2 cell systems to estimate jejunal P eff . According to the session participants, PAMPA may be used at early stages of drug development. An in-house calibration curve is normally used for the in vitro to in vivo permeability extrapolation. A few participants used built-in calibration curves from commercially available software, such as GastroPlus or Simcyp.
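Such an in-house calibration curve is typically a log–log regression of human jejunal P eff against in vitro P app for a set of reference compounds; a minimal sketch (the reference values below are illustrative placeholders, not measured data):

```python
import math

def fit_loglog_calibration(papp, peff):
    """Least-squares fit of log10(Peff,human) = a * log10(Papp,in vitro) + b,
    the usual shape of an in-house calibration curve built from reference
    compounds with known human jejunal Peff."""
    xs = [math.log10(x) for x in papp]
    ys = [math.log10(y) for y in peff]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def predict_peff(papp, a, b):
    """Extrapolate a human Peff (cm/s) from an in vitro Papp (cm/s)."""
    return 10 ** (a * math.log10(papp) + b)

# Illustrative placeholder reference set (cm/s): low, moderate, high markers
papp_ref = [1e-7, 1e-6, 1e-5]
peff_ref = [1e-5, 1e-4, 1e-3]
a, b = fit_loglog_calibration(papp_ref, peff_ref)
```

In practice, the reference set should span low, moderate, and high permeability compounds and include a calibrator with known in vivo permeability, as noted in the discussion.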
It was stated that, when a calibration curve is used, it should cover low, moderate, and high permeability compounds. To reduce interstudy or interlaboratory variability, a calibrator, or a compound with known in vivo permeability, is often utilized. On rare occasions, QSAR models have been used directly to estimate P eff . Finally, the participants shared that oral solution PK data can be used to optimize P eff . It was anecdotally agreed that the experimentally obtained measurements of P eff from in vitro assays are a measure of passive permeability. When there is a need for characterizing protein-mediated transport, transfected cell lines may be used. While for high passive permeability compounds, the impact of protein-mediated efflux may be limited, it is important to characterize the impact of efflux transporters for low passive permeability compounds, understanding the variability of experimentally obtained V max or K m . For lipophilic compounds or to address food effect, biorelevant media may be used. The value of the in situ permeability in a rat model was discussed in terms of challenges in extrapolation or experimental variability. Most regulators shared that Caco-2 data are most commonly reported in regulatory applications. Canadian and European regulatory agencies indicated that well-controlled in situ data may be accepted. Differences in how passive P eff and transporter kinetics are integrated into various software need to be considered. There was an agreement that the Caco-2 cell model performs well for high permeability compounds. It is important though to cross check across a variety of data sets and P eff measurements collected using different methodologies. 3.5.2.2 Q2: Confidence in P eff Estimation – Low vs High Permeability Compounds? 
Most participants agreed that there is a high degree of confidence in the estimated P eff for high permeability compounds, while the confidence in the estimated P eff for low to moderate permeability compounds was lower. Although no conclusions were made during the discussion regarding a cutoff value between high and low P eff , a P eff of 1.34 × 10 –4 cm/s, corresponding to the measured human jejunal P eff of metoprolol and a fraction absorbed in humans of 90%, has been used previously for this purpose. Similarly, minoxidil, with an observed human fraction absorbed of 85%, can be applied as a divider between high and low permeability. The group also acknowledged that the extensive interlaboratory variability in the measured in vitro permeability undermines the credibility of the final estimates of the human P eff , especially for low permeability compounds. Therefore, a reference data set for high and low permeability marker compounds established within each lab is beneficial. 3.5.2.3 Q3: How Do We Use in Vitro Permeability Data Generated in Biorelevant Media as Input? Biorelevant media such as FaSSIF and FeSSIF may improve the solubility of some compounds in the apical chamber, but micelle entrapment/binding may bias estimation of apparent permeability ( P app ) across monolayers. For example, Caco-2 P app of lipophilic compounds like danazol is inversely proportional to the concentration of bile salt in the donor chamber, whereas P app of more hydrophilic compounds was insensitive to the bile salt concentration. Careful consideration should be exercised when using P app data obtained in biorelevant media as input, since such data may represent a mixture of micelle entrapment and permeability. Measuring the free concentration in the donor chamber of the Transwell system or modeling drug–micelle binding and P app simultaneously may be helpful, but further studies are needed to assess the benefits of either approach.
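One way to picture the free-concentration correction mentioned above is a minimal sketch in which micelle binding is treated as a single linear association equilibrium; the binding constant and bile salt level below are hypothetical, and real systems may deviate from this simple 1:1 linear model.

```python
def free_fraction(bile_salt_mM, k_assoc_per_mM):
    """Fraction of dissolved drug that is free (not micelle-bound), assuming a
    simple linear association: f_u = 1 / (1 + K * [bile salt]). Illustrative."""
    return 1.0 / (1.0 + k_assoc_per_mM * bile_salt_mM)

def micelle_corrected_papp(papp_apparent, bile_salt_mM, k_assoc_per_mM):
    """The measured flux is J = Papp_apparent * C_total, while only free drug
    permeates: J = P_free * C_free. Hence P_free = Papp_apparent / f_u."""
    return papp_apparent / free_fraction(bile_salt_mM, k_assoc_per_mM)

# Hypothetical lipophilic drug in a FaSSIF-like medium (3 mM bile salt):
papp_fassif = 5.0e-6   # cm/s, apparent value measured in biorelevant medium
p_free = micelle_corrected_papp(papp_fassif, bile_salt_mM=3.0,
                                k_assoc_per_mM=0.5)
```

With these illustrative numbers only 40% of the donor drug is free, so the permeability referenced to free drug is 2.5-fold higher than the apparent value, which is the kind of bias the session cautioned against.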
Finally, when biorelevant media are used, pH in the mucus layer in vivo needs to be taken into consideration. Mucus pH approximates the upper gut pH. Therefore, considering the mucus layer pH and the composition of the lipids in the mucus in vivo versus in vitro may be key to more reliable estimations of P eff . 3.5.2.4 Q4: P app – P eff Correlation vs Fitting P eff to Observed Data–When to Do What? Several methodologies have emerged throughout the years to calculate gut permeability (effective permeability, P eff ) for orally administered drug products. Some of these methodologies, such as the Caco-2 in vitro system, have been initially developed to select candidates or inform “go-no go” decisions based on their permeability characteristics or to assess the need for in vivo testing. It was agreed that novel technologies such as PBBM and experimental data have been leveraged to generate in vivo predictions of the permeability in virtual populations. Accumulating knowledge in the field indicates that for high permeability compounds the Caco-2 in vitro approach appears to be of high confidence. In the absence of data collected in a Caco-2 in vitro system, a mathematical model (such as PBBM) may leverage appropriate clinical PK data sets, e.g., for a nonprecipitating oral solution to derive (estimate) a P eff value. The challenge with this approach is the type of observed data that is utilized for predicting (“fitting”) this parameter, which may include individual or mean PK profile data from an oral solution or any other dosage form for which drug release from the dosage form, and not permeation through the gut epithelium, is the rate limiting step. The use of individual level PK data may result in inflating the intersubject variability incorporated into an in silico model, while the use of an oral dosage form, other than oral solution, may lead to a parameter model identifiability issue. 
As such, leveraging in vitro permeability data collected in a Caco-2 system toward an initial “bottom-up” approach for P eff is advisable. Confirming the calculated P eff using informative clinical PK data is necessary. In the case where Caco-2 data do not result in satisfactory predictions, it may be acceptable to perform parameter optimization on P eff within the developed PBBM compared with the available clinical PK data. Gut metabolism, particularly relevant for high extraction drugs, was identified as a complicating factor for P eff characterization in the PBBM during the discussion. To handle model identifiability, for PBBM development purposes, applying an in vitro-in vivo extrapolation to inform a “bottom-up” approach in which gut metabolism is mechanistically predicted was suggested. Knowledge on the relative contribution of the gut metabolism toward the overall metabolism (clearance) was identified as critical toward accurately capturing the gut extraction ratio in a PBBM. It is expected that this recommended workflow will perform better for highly permeable compounds compared to low permeability compounds for which additional challenges may need to be addressed. 3.5.2.5 Q5: When Can Permeability Input into PBBM Be Based on Passive Permeability Alone, and When Is There a Need to Account for Uptake/Efflux Transporter Mediated Transport? Inclusion of transporter effects into an in silico model should be data driven. The decision should be based on the experimental results. Nonlinearity in clinical studies could be due to a transporter effect. Further exploration of the extent of the impact may be warranted. A well-controlled modeling and simulation approach may be accepted by regulatory agencies to investigate the impact of a transporter. , A clinical DDI study for transporter inhibition may eventually become warranted. 3.5.2.6 Q6: What Is the Best Practice to Account for Uptake/Efflux Transporter Mediated Transport? 
When a transporter effect on the clinical outcome for an orally administered drug is suspected, the extent of transporter involvement in oral absorption, and specifically in gut permeability, should be thoroughly and systematically investigated. In vitro and animal studies have sometimes been used to determine the need for further in vivo studies in humans. The activity of the transporter protein can be characterized across a dose range of the victim drug and in the presence of well-established transporter activity modifiers within the context of in vitro or in vivo studies exploring potential drug–drug interactions and their clinical impact. These types of studies provide reliable estimates for parameters describing the saturable component of the absorption process governed by transporter proteins (Michaelis–Menten kinetics). These parameters include but are not limited to K i (inhibition constant), K I (inhibitor concentration causing half-maximal inactivation), k inact (maximal inactivation rate constant), K m (Michaelis–Menten constant), J max (maximal flux rate), and V max (maximal rate). Depending on the implementation of the saturable absorption process in a mechanistic PBBM, these parameters may serve as model inputs. With the application of in vitro–in vivo extrapolations that are validated for their intended purpose and embedded into PBBMs, population predictions in virtual healthy subjects or patients may be generated. The session participants acknowledged the challenge associated with determining appropriate model inputs for the V max parameter, most probably because in vitro V max values are typically highly dependent on the in vitro system utilized for data collection. Additional considerations regarding the regional expression of transporter proteins across the GI tract and the relative expression of these proteins are expected to inform key decisions on the development and validation of PBBMs that incorporate gut transporters.
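As a minimal sketch of how such Michaelis–Menten parameters enter an absorption model, the apparent permeability of a drug subject to passive transport opposed by a saturable efflux flux can be written as P app (C) = P passive − J max /(K m + C). All parameter values below are hypothetical; the units are merely kept mutually consistent.

```python
def net_permeability(conc_uM, p_passive, jmax, km_uM):
    """Apparent permeability when a saturable (Michaelis-Menten) efflux flux
    opposes passive transport: dividing the net flux
        J(C) = P_passive*C - Jmax*C/(Km + C)
    by C gives P_app(C) = P_passive - Jmax/(Km + C). Illustrative values only."""
    return p_passive - jmax / (km_uM + conc_uM)

# Efflux saturates at high luminal concentration, so apparent permeability
# rises toward the passive limit (one source of dose nonlinearity):
p_low_dose  = net_permeability(1.0,   p_passive=2.0e-5, jmax=1.0e-4, km_uM=10.0)
p_high_dose = net_permeability(500.0, p_passive=2.0e-5, jmax=1.0e-4, km_uM=10.0)
```

The supra-proportional exposure sometimes seen in single ascending dose studies for P-gp or BCRP substrates follows directly from this saturation behavior.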
Guidelines and relevant literature are abundantly available for efflux transporters such as P-glycoprotein (P-gp) and BCRP. These transporter proteins have been documented to limit bioavailability for orally administered drug substances by pumping them back into the gut lumen after they enter the enterocytes. However, there is a significant knowledge gap regarding uptake gut transporters and their relative contribution to oral absorption, which renders their incorporation into mechanistic in silico models challenging. 3.5.2.7 Q7: What Is the Confidence in Using the Estimated Jejunal P eff to Define the P eff in the Other Compartments? Based on available experimental data, there is low confidence in using the estimated jejunal P eff to define P eff in the other intestinal compartments. The relative values used for P eff in the jejunum versus colon may be extremely important when modeling extended-release (ER) and modified-release (MR) products. For low permeability compounds, jejunal P eff is considered to be higher than P eff in the colon. This reflects the current general understanding within the community. Commercially available software currently utilizes the same value for P eff in both the jejunum and colon. This value is corrected for the effective surface area corresponding to the different gut segments. In the absence of observed data, the group agreed that the correction is necessary but may be an overly simplistic approach. The attendees agreed that it is challenging to understand how the effective surface area in the different gut regions is estimated and acknowledged that potential “pockets” in the gut are not considered. 3.5.2.8 Q8: How Can Colon P eff Be Estimated? Experimentally, a colon P eff can be obtained with local administration of the compounds of interest using either intubation or telemetric capsule techniques. Indirectly, when utilizing a modeling approach, the group shared that they would vary the P eff value used as the model input until the simulated profile reproduces the observed data.
This is essentially a method where model fitting is involved. BO Session A - Solubility: From in Vitro Best Practices to in Vivo Relevance This session began with speaker Deanna Mudie (Lonza) and was led by Evangelos Kotzagiorgis (EMA) and Claire Mackie (Janssen), with Tessa Carducci (Merck & Co., Inc., Rahway, NJ, USA) and Mario Cano-Vega (Amgen) as scribes. 3.1.1 Presentation Solubility is a fundamental driver of drug bioperformance. It is one of the fundamental properties that define the BCS and is an important input to PBBM. Generally, it defines the maximum concentration of a drug in solution (e.g., in GI fluid) at equilibrium or in a metastable, supersaturated state. A compound’s solubility is influenced by the interplay between the properties of the drug, the excipients within the formulation, and the GI fluid. This interplay affects the overall bulk solubility along the GI tract and the solid particle surface solubility, as well as solubilization in bile, fats, and formulation components. Overall, solubility impacts a drug’s oral bioperformance via its influence on properties such as dissolution, precipitation, and maximum concentration in solution, i.e., the driving force for absorption. 3.1.1.1 Case Study 1: Impact of Excipients on Solubility and Dissolution Deanna Mudie discussed a case study showing how excipients can impact the solubility and dissolution rate of the BCS Class 2 drug substance, belinostat. Belinostat was formulated as three different spray dried amorphous solid dispersions (ASDs) using different dispersion polymers, one enteric (HPMCAS-M) and the other two neutral (PVP K30 and PVP VA64). Belinostat amorphous solubility was measured in the absence and presence of these polymers using an in vitro UV solvent shift test. When no polymer was present, amorphous solubility exceeded 1800 μg/mL in gastric medium (pH 2 HCl) and 2500 μg/mL in intestinal medium (phosphate buffer at pH 6.5 containing FaSSIF powder).
However, in the presence of polymer, the amorphous solubility was depressed at least 2- to 6-fold with the highest depression for PVP VA. When the extent of dissolution of ASDs was measured in a nonsink dissolution test in intestinal medium, the results matched the amorphous solubility values measured in the UV solvent shift test. However, the results differed when a transfer dissolution test was run with ASDs dissolved in a gastric medium (pH 2 HCl) at a nonsink dose, where concentrated intestinal medium (phosphate buffer at pH 6.5 containing FaSSIF powder) was added after 30 min . In this case, while the PVP VA and PVP K30 ASDs reached the solubilities measured in the solvent shift test, solubility was significantly lower for the ASD made with HPMCAS-M. This was because these ASD particles aggregated in the gastric medium due to the low solubility of HPMCAS-M at acidic pH. In vitro dissolution profiles were incorporated into oral absorption simulations, using the Takano Z-factor method in GastroPlus. The HPMCAS-M ASD had the smallest z-factor and the largest calculated effective particle radius, reflecting the particle aggregation observed in the dissolution test. The PVP K30 ASD had the highest z-factor and driving force for dissolution. This mirrors an in vivo study in fasted beagles, where the PVP K30 ASD performed best . Furthermore, oral absorption simulations gave a good description of the concentration–time profiles. It was clear that the ASD dispersion polymer impacted the belinostat in vivo performance by attenuating amorphous solubility and driving effective particle size. High belinostat and polymer solubility in gastric medium maximized in vitro dissolution rate and in vivo AUC and C max . 3.1.1.2 Case Study 2: Impact of Excipients on Solubilization and Permeability In another example, Deanna Mudie showed how nanosized drug–polymer colloids can increase the driving force for absorption. 
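The Takano z-factor approach referenced above can be sketched as a forward-Euler integration of the z-factor dissolution equation; the dose, solubility, volume, and z values below are hypothetical and chosen only to show how a smaller z (e.g., reflecting particle aggregation and a larger effective particle radius) slows dissolution under nonsink conditions.

```python
def z_factor_dissolution(z, x0_mg, cs_mg_ml, vol_ml, t_end_s, dt=1.0):
    """Forward-Euler sketch of the z-factor dissolution model:
        dXd/dt = z * X0**(1/3) * (X0 - Xd)**(2/3) * (Cs - Xd/V)
    where Xd is the dissolved amount and z lumps diffusivity, particle density,
    and initial particle radius. All example values are illustrative."""
    xd, t = 0.0, 0.0
    while t < t_end_s:
        solid = max(x0_mg - xd, 0.0)
        driving_force = max(cs_mg_ml - xd / vol_ml, 0.0)  # nonsink conditions
        rate = z * x0_mg ** (1 / 3) * solid ** (2 / 3) * driving_force
        xd = min(xd + rate * dt, x0_mg)
        t += dt
    return xd

# A larger z (smaller effective particle radius) dissolves faster:
fast = z_factor_dissolution(z=1.0e-3, x0_mg=10.0, cs_mg_ml=0.05,
                            vol_ml=250.0, t_end_s=600.0)
slow = z_factor_dissolution(z=1.0e-4, x0_mg=10.0, cs_mg_ml=0.05,
                            vol_ml=250.0, t_end_s=600.0)
```

The tenfold difference in z between the two runs mirrors the kind of contrast reported between the PVP K30 and HPMCAS-M ASDs, though the numbers here are placeholders rather than fitted values.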
This example was for itraconazole, a highly lipophilic BCS 2 weak base formulated as spray dried ASDs using different grades of HPMCAS. Itraconazole ASDs formed nanosized drug–polymer colloids in the intestinal donor medium of an in vitro membrane flux test, contributing to “dissolved” concentrations above the amorphous solubility . Concentration and size of drug–polymer colloids were determined using microcentrifugation, ultracentrifugation, and dynamic light scattering. More colloids were produced with the ASD made using hydrophilic HPMCAS-L than with the more hydrophobic HPMCAS-H. The marketed formulation, Sporanox, did not form drug–polymer colloids. Drug–polymer colloids increased the rate of permeation into the acceptor medium of the in vitro membrane flux test with the fastest rate seen for the highest colloid-forming, HPMCAS-L ASD. Faster permeation occurs because absorption of these formulations is limited by the unstirred water layer (UWL) adjacent to the membrane, and drug–polymer colloids increase effective drug diffusivity by acting as “shuttles” and helping to replenish free drug at the membrane surface. , This phenomenon was accounted for in oral absorption simulations by modifying the effective permeability ( P eff ) in GastroPlus to account for the higher P eff of colloid-forming formulations ( P eff, nano ). When these ASDs were administered to fasted rats, a trend similar to the in vitro experiments was observed, with the highest absorption rates corresponding with the highest colloid concentrations. Absorption simulations captured the concentration–time profiles well . However, drug–polymer colloids do not always improve the absorption. Drug–polymer colloids have the potential to improve absorption by increasing effective drug diffusivity when absorption is solubility-permeability-limited and permeation is UWL limited. Also, the colloid concentration must be large compared to the concentration of unbound plus micelle bound drug. 
The influence of drug–polymer colloids on permeation can be predicted by comparing the calculated P eff, nano to P eff and running parameter sensitivity analyses (PSAs). For this case study, it was concluded that drug–polymer colloids in excess of amorphous solubility increased the absorption rate of itraconazole ASDs. Drug–polymer colloid concentration can be measured in vitro, and P eff, nano can be used to model the influence on in vivo performance. 3.1.1.3 Case Study 3: Impact of Dissolved Drug on Surface Solubility and Dissolution Deanna Mudie discussed how dissolved acidic or basic drugs can influence solid particle surface solubility and dissolution rate by modulating the surface pH. This example was for acalabrutinib, a BCS 2 weak base. Acalabrutinib free base shows a 43% reduction in AUC when taken with a proton pump inhibitor (PPI) due to reduced solubility and gastric dissolution at elevated gastric pH. A maleate salt form of acalabrutinib mitigates this effect. Surface pH can be estimated in vitro by measuring the pH of a saturated solution of the drug in the relevant medium. Results of measurements of acalabrutinib in HCl or NaOH were shown for an acalabrutinib ASD, the crystalline free base, and the maleate salt form. For the crystalline and amorphous free base, the pH of a saturated solution was higher than the starting bulk pH below the highest acalabrutinib p K a , with a larger pH change for the amorphous drug due to its higher intrinsic solubility. On the other hand, a saturated solution of the maleate salt form showed minimal pH change at low pH, but a decrease in slurry/surface pH above pH max . Modeling dissolution using bulk rather than surface pH risks misrepresenting the dissolution rate in cases when surface pH differs from bulk medium pH. Surface solubility can be accounted for in oral absorption software by, for example, setting bulk pH equal to surface pH or inputting surface solubility rather than bulk solubility as a function of pH in, e.g., GastroPlus.
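A pH-dependent solubility input of the kind just mentioned can be generated with the Henderson–Hasselbalch relation; the sketch below uses a hypothetical monoprotic weak base (pKa 6.0, intrinsic solubility 1 μg/mL) and deliberately ignores the salt-limited plateau above pH max .

```python
def solubility_weak_base(ph, pka, s0):
    """Henderson-Hasselbalch total solubility of a monoprotic weak base:
        S(pH) = S0 * (1 + 10**(pKa - pH))
    Valid below pHmax; the common-ion/salt-limited plateau is not modeled."""
    return s0 * (1.0 + 10.0 ** (pka - ph))

# Hypothetical weak base, pKa 6.0, intrinsic solubility S0 = 1 ug/mL:
s_gastric    = solubility_weak_base(2.0, 6.0, 1.0)   # largely ionized
s_intestinal = solubility_weak_base(6.5, 6.0, 1.0)   # mostly un-ionized
```

The roughly four-orders-of-magnitude drop between gastric and intestinal pH for this hypothetical base is exactly why elevated gastric pH (e.g., with a PPI) can collapse gastric dissolution for weakly basic drugs.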
Bottom-up oral absorption predictions of crystalline and amorphous acalabrutinib in fasted beagle dogs treated with either pentagastrin (gastric pH ∼ 1–2) or famotidine (gastric pH ∼ 6–7) provided good in vivo study prediction accuracy (absolute average fold error of AUC 0-inf < 1.6). However, not accounting for surface pH/solubility only modestly affected the simulations. A 15–20% difference in simulated AUC and C max was observed for the crystalline free base in pentagastrin-treated dogs, with no difference for the other simulations. This result is attributed to the rapid dissolution rate and solubility-limited absorption of acalabrutinib at bulk pH 2 and similarity between bulk and surface pH at pH 6. However, Pepin et al. modeled dissolution rate of crystalline acalabrutinib and found that use of bulk instead of surface solubility led to an overall 48% overprediction across the GI pH range, with prediction error highest at bulk pH 4.5 (up to 250%) where a difference between surface and bulk pH is observed and dissolution rate is much slower. Deanna Mudie discussed some criteria for predicting when a weakly basic or acidic drug or excipient would tend to modulate surface pH and dissolution. For example, the tendency for pH modulation increases as weak acid p K a decreases or weak base p K a increases, when intrinsic solubility increases, and when buffer capacity decreases. Published calculations using inputs such as p K a (s), intrinsic solubility, and buffer properties can be used to predict when surface pH is not equal to bulk pH. , In addition, surface pH changes are most likely to impact oral absorption simulations when dissolution is rate-limiting. PSAs were conducted to determine the sensitivity. For this case study, it was concluded that acalabrutinib can modulate surface pH, and the extent and direction of pH modulation depends on solid form type (e.g., amorphous, crystalline, salt). 
The extent to which drug surface pH modulation in vitro manifests as changes in AUC and C max in vivo and in silico depends on drug, formulation, and fluid properties. To end the talk, Deanna Mudie concluded that solubility drives oral bioperformance through dissolution, precipitation, and permeation and is influenced by the interplay between the drug, the formulation, and the GI fluids. Importantly, both solubility and bioperformance can be predicted using targeted in vitro tools combined with PBBM. 3.1.2 Discussion During breakout session A, participants discussed fundamental questions regarding the measurement and utilization of solubility data. 3.1.2.1 Q1: What Specifically Do Bulk and Surface Solubility Measurements Assess and Why Are These Assessments Crucial in the Context of PBPK/PBBM Modeling? Bulk drug solubility allows the calculation of drug amount dissolved at equilibrium if the volume of the medium is known, and its properties are not altered with time. Conversely, surface solubility is the drug solubility at the drug solid–liquid interface. While bulk solubility influences factors, such as solution-mediated precipitation, surface solubility drives drug dissolution and surface-mediated precipitation. For weakly acidic and basic drugs, surface pH may deviate from bulk pH when there is an acid–base reaction occurring at the drug liquid interface. , Consequently, measuring both bulk and surface solubility evaluations is important to accurately capture dissolution and precipitation rates in PBBMs. The choice of buffer for these measurements was highlighted as a key consideration and should align with the specific region of the GI tract being simulated. Furthermore, the session discussed the dynamic impact of excipients on the surface and bulk pH. For example, acidulants included in formulations gradually dissolve over time, and the extent of their effect depends on both time and concentration. 
This comprehensive discussion illuminated the critical role of understanding bulk and surface solubility and the contributing factors in making informed decisions during drug product development. 3.1.2.2 Q2: Which Media (e.g., FaSSIF V1 and V2) Should Be Chosen for Accurate Comparison to the in Vivo Situation, Considering Factors Such as the Presence and Concentration of Bile Salts, Fats in the Stomach, and Buffer pH? Participants agreed that there is not a one-size-fits-all “best” version of simulated GI media to choose for accurate prediction of in vivo conditions but that each may serve distinct purposes in modeling scenarios. , When measuring drug solubilities across different versions of FaSSIF and aspirated human intestinal fluids, researchers have found solubility values to vary between media. , In addition, no single medium captures the normal variation in these fluids. It is important to understand the properties and compositions of different types of simulated media and how they may interact with the drug product of interest to influence solubility, dissolution, and precipitation. For example, fasted state simulated intestinal fluid (FaSSIF) evolved to have a lower buffer capacity when moving from version 1 to version 3. Version 3 incorporates additional bile components (e.g., lecithin hydrolysis products and cholesterol) that are not found in versions 1 or 2. Factors such as buffer capacity and buffer species can impact surface solubility for acidic and basic drugs, and the type and concentration of bile components impact solubilization, especially for lipophilic drugs when nonionized at the medium pH. Some participants noted that FaSSIF v1 appears to be suitable for BCS classes 1 and 3 compounds, whereas FaSSIF v2 may better capture solubilities of some BCS class 2 and 4 compounds. Investigating solubility in the fed state can be challenging due to the dependence of media composition and resulting drug solubility on meal content. 
In addition, the inclusion of components such as fats in simulated gastric media requires careful preparation and complicated analytical techniques for assessing drug solubility. Nevertheless, gaps in the ability to model drug absorption in the fed state dictate the need to consider the impact of meal components on drug solubility. Several types of simulated fed state media, such as FeSSIF, FeSSGF, and FEDGAS (Biorelevant, London, UK) are available for this purpose. Considering these findings, the session concluded that it is crucial to deliberate whether customizing the buffer for specific applications or establishing standardized buffers is the most prudent approach. In any case, panelists emphasized the importance of providing precise and comprehensive descriptions when selecting buffers or biorelevant media. Given the limited experience in this field, it becomes imperative to offer supplementary information to facilitate a better understanding of the decisions made and their impact on the model. 3.1.2.3 Q3: When Is the Optimal Time to Measure the Solubility in Human Aspirates? Measuring drug solubility in human aspirates has not gained widespread adoption due to factors such as availability and cost; however, participants recognized its potential benefits, especially in improving modeling of poorly soluble, nonionizable lipophilic drugs. These drugs often exhibit wide variation in solubility as a function of micelle or vesicle composition, since simulated fluids (e.g., FaSSIF) lack many endogenous, bile- or vesicle-forming components. Participants reached a consensus that the benefit of using aspirated human fluid rather than simulated fluid is probably less important if the drug is ionized in the GI tract. In these cases, pH is the main driver of solubility. 3.1.2.4 Q4: For Weak Bases, Is There Added Value in Measuring Solubility Across a Broad pH Range, Specifically pH 8–9? If so, Which Media Should Be Considered? 
The participants agreed that the pH range over which solubility is measured is an essential factor to consider for weakly basic and weakly acidic drugs. This pH range should cover the GI physiology, i.e., from approximately pH 1 to 8. Experimental points should capture multiple degrees of ionization (e.g., 0% ionized, 50% ionized, 90% ionized) depending on the p K a . Measurements at pH values >8 (using NaOH for adjustment) may be needed to capture drug intrinsic solubility for weak bases (i.e., highest basic p K a + 2 pH units). One may also consider determining solubility in purified water and unbuffered media to determine the surface pH of the drug. For salts of weak acids and bases, the measurement of the solubility at and around pH max is recommended. It was emphasized that researchers should measure the medium pH prior to addition of drug and the pH of the final saturated solution. Both start and final pH values should be reported. The media composition should also be documented, since it may contain ions in common with the drug substance, which could depress drug solubility, or lead to salt formation, which could change the nature of the drug substance. 3.1.2.5 Q5: What Solubility Value Should Be Employed for Release from an Amorphous Solid Dispersion Containing a Polymer? During the session, participants acknowledged the challenges associated with developing PBBMs for dosage forms containing an amorphous solid dispersion (ASD). When modeling release from ASDs it is important to understand whether dissolution is controlled by the drug, the polymer, or the combination of the two. When dissolution rate is driven by the drug, the amorphous (i.e., kinetic) solubility in the given medium is likely the appropriate solubility to employ for defining the rate of drug release. However, if the dissolving ASD contains both amorphous and crystalline drug, then the solubility of the crystalline form in that medium and its impact on drug release may also need to be considered.
When modeling drug precipitation and redissolution of ASDs, the amorphous solubility and solubilities of any crystalline forms to which the amorphous drug may precipitate should be considered. Some ASDs may undergo liquid–liquid phase separation (LLPS) and precipitate to amorphous nanodroplets, which may then redissolve according to the amorphous solubility. In other cases, amorphous drug may crystallize, and the solubility of the crystalline form will be an important input to account for drug precipitation and solubility limitations to redissolution along the GI tract. It was also emphasized by participants that measuring amorphous solubility in the presence of formulation excipients, such as polymers, is critical. For example, ASD polymers can either decrease amorphous solubility or increase it through the formation of drug–polymer colloids. , It is worth highlighting that the impact of these excipients varies as a function of the time and concentration. Participants also noted that, for ASDs, acquiring an in-depth understanding of drug speciation, with a particular focus on detecting drug–polymer colloid formation using different analytical techniques, may be necessary since the presence of these species can impact the driving force for drug permeation. These considerations are pivotal for the effective development of PBBMs for ASDs. In conclusion, the breakout session produced several significant takeaways. Participants in this session recognized the inherent complexity of drug solubility and its substantial influence on the development of PBBMs. The discussion brought to the forefront various critical topics, including distinctions between bulk, surface, thermodynamic, and kinetic solubility as well as points to consider during experimental measurements of these parameters. 
Given the intricate nature of these phenomena, it is strongly encouraged to include details regarding the rationale behind model development for solubility inputs for regulatory submissions. These should comprise the criteria for selecting and applying specific solubility parameters, choosing appropriate models, defining the experimental conditions for measuring solubility values, and highlighting the theoretical assumptions. Additionally, participants advised conducting parameter sensitivity analyses to ensure a robust and comprehensive understanding of the models utilized in drug product quality assessments. Important points to consider when measuring bulk and surface solubilities of crystalline and amorphous drugs and formulations are presented in the Supporting Information . Presentation Solubility is a fundamental driver of drug bioperformance. It is one of the fundamental properties that defines the BCS and is an important input to PBBM. Generally, it defines the maximum concentration of a drug in solution (e.g., in GI fluid) at equilibrium or a metastable, supersaturated state. A compound’s solubility is influenced by the interplay between the properties of the drug, the excipients within the formulation, and the GI fluid. This interplay affects the overall bulk solubility along the GI tract and the solid particle surface solubility, as well as solubilization in bile, fats, and formulation components. , Overall, solubility impacts a drug’s oral bioperformance via its influence on properties such as dissolution, precipitation, and maximum concentration in solution, i.e., the driving force for absorption. 3.1.1.1 Case Study 1: Impact of Excipients on Solubility and Dissolution Deanna Mudie discussed a case study showing how excipients can impact the solubility and dissolution rate of the BCS Class 2 drug substance, belinostat. 
Belinostat was formulated as three different spray dried amorphous solid dispersions (ASDs) using different dispersion polymers, one enteric (HPMCAS-M) and the other two neutral (PVP K30 and PVP VA64). Belinostat amorphous solubility was measured in the absence and presence of these polymers using an in vitro UV solvent shift test. When no polymer was present, amorphous solubility exceeded 1800 μg/mL in gastric medium (pH 2 HCl) and 2500 μg/mL in intestinal medium (phosphate buffer at pH 6.5 containing FaSSIF powder). However, in the presence of polymer, the amorphous solubility was depressed at least 2- to 6-fold, with the highest depression for PVP VA. When the extent of dissolution of ASDs was measured in a nonsink dissolution test in intestinal medium, the results matched the amorphous solubility values measured in the UV solvent shift test. However, the results differed when a transfer dissolution test was run with ASDs dissolved in a gastric medium (pH 2 HCl) at a nonsink dose, where concentrated intestinal medium (phosphate buffer at pH 6.5 containing FaSSIF powder) was added after 30 min. In this case, while the PVP VA and PVP K30 ASDs reached the solubilities measured in the solvent shift test, solubility was significantly lower for the ASD made with HPMCAS-M. This was because these ASD particles aggregated in the gastric medium due to the low solubility of HPMCAS-M at acidic pH. In vitro dissolution profiles were incorporated into oral absorption simulations using the Takano z-factor method in GastroPlus. The HPMCAS-M ASD had the smallest z-factor and the largest calculated effective particle radius, reflecting the particle aggregation observed in the dissolution test. The PVP K30 ASD had the highest z-factor and driving force for dissolution. This mirrors an in vivo study in fasted beagles, where the PVP K30 ASD performed best. Furthermore, oral absorption simulations gave a good description of the concentration–time profiles.
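The z-factor dissolution model referenced above lends itself to a compact sketch. The following is a minimal Euler integration of dXd/dt = z·X0^(1/3)·Xs^(2/3)·(Cs − C) for a single well-mixed compartment, where z lumps diffusivity, particle density, diffusion-layer thickness, and initial particle radius. It is illustrative only — parameter values are hypothetical, and this is not the GastroPlus implementation:

```python
def z_factor_dissolution(dose_mg, z, cs_mg_ml, volume_ml, t_min, dt=0.01):
    """Euler integration of the Takano z-factor model: the dissolved mass Xd grows
    as dXd/dt = z * X0**(1/3) * Xs**(2/3) * (Cs - C), with Xs the undissolved mass."""
    dissolved = 0.0
    for _ in range(int(t_min / dt)):
        solid = max(dose_mg - dissolved, 0.0)
        conc = dissolved / volume_ml
        rate = z * dose_mg ** (1 / 3) * solid ** (2 / 3) * max(cs_mg_ml - conc, 0.0)
        # cap the step so dissolution can neither exceed the dose nor overshoot saturation
        dissolved = min(dissolved + rate * dt, dose_mg, cs_mg_ml * volume_ml)
    return dissolved / volume_ml  # concentration in mg/mL

# dose-limited vs solubility-limited behavior (hypothetical parameters)
fast = z_factor_dissolution(dose_mg=10.0, z=1.0, cs_mg_ml=1.0, volume_ml=100.0, t_min=60.0)
capped = z_factor_dissolution(dose_mg=500.0, z=1.0, cs_mg_ml=1.0, volume_ml=100.0, t_min=60.0)
```

With a large dose the simulated concentration plateaus at Cs, reproducing the solubility-limited behavior discussed above; a smaller z (as fitted for the aggregated HPMCAS-M ASD) simply slows the approach to that ceiling.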
It was clear that the ASD dispersion polymer impacted the belinostat in vivo performance by attenuating amorphous solubility and driving effective particle size. High belinostat and polymer solubility in gastric medium maximized in vitro dissolution rate and in vivo AUC and Cmax.

3.1.1.2 Case Study 2: Impact of Excipients on Solubilization and Permeability

In another example, Deanna Mudie showed how nanosized drug–polymer colloids can increase the driving force for absorption. This example was for itraconazole, a highly lipophilic BCS Class 2 weak base formulated as spray dried ASDs using different grades of HPMCAS. Itraconazole ASDs formed nanosized drug–polymer colloids in the intestinal donor medium of an in vitro membrane flux test, contributing to “dissolved” concentrations above the amorphous solubility. Concentration and size of drug–polymer colloids were determined using microcentrifugation, ultracentrifugation, and dynamic light scattering. More colloids were produced with the ASD made using hydrophilic HPMCAS-L than with the more hydrophobic HPMCAS-H. The marketed formulation, Sporanox, did not form drug–polymer colloids. Drug–polymer colloids increased the rate of permeation into the acceptor medium of the in vitro membrane flux test, with the fastest rate seen for the highest colloid-forming HPMCAS-L ASD. Faster permeation occurs because absorption of these formulations is limited by the unstirred water layer (UWL) adjacent to the membrane, and drug–polymer colloids increase effective drug diffusivity by acting as “shuttles” that help replenish free drug at the membrane surface. This phenomenon was accounted for in oral absorption simulations by modifying the effective permeability (Peff) in GastroPlus to account for the higher Peff of colloid-forming formulations (Peff,nano).
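The Peff,nano adjustment can be approximated by weighting drug diffusivity by the free and colloid-bound concentrations in the UWL. The sketch below is an assumption-laden simplification — flux is taken as proportional to D·C across the UWL, and the function name, symbols, and example numbers are hypothetical — not the exact GastroPlus treatment:

```python
def p_eff_nano(p_eff, c_free, c_colloid, d_free, d_colloid):
    """Scale a UWL-limited permeability by the extra flux carried by colloids,
    which diffuse more slowly (d_colloid < d_free) but add shuttled mass."""
    enhancement = (d_free * c_free + d_colloid * c_colloid) / (d_free * c_free)
    return p_eff * enhancement

# e.g., 90% of "dissolved" drug held in colloids that diffuse 10x slower than free drug
ratio = p_eff_nano(1.0e-4, c_free=10.0, c_colloid=90.0,
                   d_free=8.0e-6, d_colloid=8.0e-7) / 1.0e-4
```

The enhancement reduces to 1 when no colloids are present (c_colloid = 0), consistent with Peff,nano = Peff for the non-colloid-forming Sporanox formulation, and it only becomes meaningful when the colloid-bound concentration is large relative to the free plus micelle-bound drug, as noted in the case study.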
When these ASDs were administered to fasted rats, a trend similar to the in vitro experiments was observed, with the highest absorption rates corresponding with the highest colloid concentrations. Absorption simulations captured the concentration–time profiles well. However, drug–polymer colloids do not always improve absorption. Drug–polymer colloids have the potential to improve absorption by increasing effective drug diffusivity when absorption is solubility-permeability-limited and permeation is UWL-limited. Also, the colloid concentration must be large compared to the concentration of unbound plus micelle-bound drug. The influence of drug–polymer colloids on permeation can be predicted by comparing calculated Peff,nano to Peff and running PSAs. For this case study, it was concluded that drug–polymer colloids in excess of amorphous solubility increased the absorption rate of itraconazole ASDs. Drug–polymer colloid concentration can be measured in vitro, and Peff,nano can be used to model the influence on in vivo performance.

3.1.1.3 Case Study 3: Impact of Dissolved Drug on Surface Solubility and Dissolution

Deanna Mudie discussed how dissolved acidic or basic drugs can influence solid particle surface solubility and dissolution rate by modulating the surface pH. This example was for acalabrutinib, a BCS Class 2 weak base. Acalabrutinib free base shows a 43% reduction in AUC when taken with a PPI due to reduced solubility and gastric dissolution at elevated gastric pH. A maleate salt form of acalabrutinib mitigates this effect. Surface pH can be estimated in vitro by measuring the pH of a saturated solution of the drug in the relevant medium. Results of measurements of acalabrutinib in HCl or NaOH were shown for an acalabrutinib ASD, the crystalline free base, and the maleate salt form.
For the crystalline and amorphous free base, the pH of a saturated solution was higher than the starting bulk pH below the highest acalabrutinib pKa, with a larger pH change for the amorphous drug due to its higher intrinsic solubility. On the other hand, a saturated solution of the maleate salt form showed minimal pH change at low pH but a decrease in slurry/surface pH above pHmax. Modeling dissolution rate using bulk rather than surface pH carries a risk of misrepresenting dissolution rate for cases when surface pH differs from bulk medium pH. Surface solubility can be accounted for in oral absorption software by, for example, setting bulk pH equal to surface pH or inputting surface solubility rather than bulk solubility as a function of pH in, e.g., GastroPlus. Bottom-up oral absorption predictions of crystalline and amorphous acalabrutinib in fasted beagle dogs treated with either pentagastrin (gastric pH ∼ 1–2) or famotidine (gastric pH ∼ 6–7) provided good in vivo study prediction accuracy (absolute average fold error of AUC0–inf < 1.6). However, not accounting for surface pH/solubility only modestly affected the simulations. A 15–20% difference in simulated AUC and Cmax was observed for the crystalline free base in pentagastrin-treated dogs, with no difference for the other simulations. This result is attributed to the rapid dissolution rate and solubility-limited absorption of acalabrutinib at bulk pH 2 and the similarity between bulk and surface pH at pH 6. However, Pepin et al. modeled the dissolution rate of crystalline acalabrutinib and found that use of bulk instead of surface solubility led to an overall 48% overprediction across the GI pH range, with prediction error highest at bulk pH 4.5 (up to 250%), where a difference between surface and bulk pH is observed and dissolution rate is much slower. Deanna Mudie discussed some criteria for predicting when a weakly basic or acidic drug or excipient would tend to modulate surface pH and dissolution.
For example, the tendency for pH modulation increases as weak acid pKa decreases or weak base pKa increases, when intrinsic solubility increases, and when buffer capacity decreases. Published calculations using inputs such as pKa(s), intrinsic solubility, and buffer properties can be used to predict when surface pH is not equal to bulk pH. In addition, surface pH changes are most likely to impact oral absorption simulations when dissolution is rate-limiting. PSAs were conducted to determine the sensitivity. For this case study, it was concluded that acalabrutinib can modulate surface pH, and the extent and direction of pH modulation depend on solid form type (e.g., amorphous, crystalline, salt). The extent to which drug surface pH modulation in vitro manifests as changes in AUC and Cmax in vivo and in silico depends on drug, formulation, and fluid properties. To end the talk, Deanna Mudie concluded that solubility drives oral bioperformance through dissolution, precipitation, and permeation and is influenced by the interplay between the drug, the formulation, and the GI fluids. Importantly, both solubility and bioperformance can be predicted using targeted in vitro tools combined with PBBM.

3.1.2 Discussion

During breakout session A, participants discussed fundamental questions regarding the measurement and utilization of solubility data.

3.1.2.1 Q1: What Specifically Do Bulk and Surface Solubility Measurements Assess and Why Are These Assessments Crucial in the Context of PBPK/PBBM Modeling?

Bulk drug solubility allows calculation of the amount of drug dissolved at equilibrium if the volume of the medium is known and its properties are not altered with time. Conversely, surface solubility is the drug solubility at the solid–liquid interface. While bulk solubility influences factors such as solution-mediated precipitation, surface solubility drives drug dissolution and surface-mediated precipitation. For weakly acidic and basic drugs, surface pH may deviate from bulk pH when an acid–base reaction occurs at the drug–liquid interface. Consequently, measuring both bulk and surface solubility is important to accurately capture dissolution and precipitation rates in PBBMs. The choice of buffer for these measurements was highlighted as a key consideration and should align with the specific region of the GI tract being simulated. Furthermore, the session discussed the dynamic impact of excipients on surface and bulk pH. For example, acidulants included in formulations gradually dissolve over time, and the extent of their effect depends on both time and concentration.
This comprehensive discussion illuminated the critical role of understanding bulk and surface solubility and the contributing factors in making informed decisions during drug product development.

3.1.2.2 Q2: Which Media (e.g., FaSSIF V1 and V2) Should Be Chosen for Accurate Comparison to the in Vivo Situation, Considering Factors Such as the Presence and Concentration of Bile Salts, Fats in the Stomach, and Buffer pH?

Participants agreed that there is not a one-size-fits-all “best” version of simulated GI media to choose for accurate prediction of in vivo conditions but that each may serve distinct purposes in modeling scenarios. When measuring drug solubilities across different versions of FaSSIF and aspirated human intestinal fluids, researchers have found solubility values to vary between media. In addition, no single medium captures the normal variation in these fluids. It is important to understand the properties and compositions of different types of simulated media and how they may interact with the drug product of interest to influence solubility, dissolution, and precipitation. For example, fasted state simulated intestinal fluid (FaSSIF) evolved to have a lower buffer capacity when moving from version 1 to version 3. Version 3 incorporates additional bile components (e.g., lecithin hydrolysis products and cholesterol) that are not found in versions 1 or 2. Factors such as buffer capacity and buffer species can impact surface solubility for acidic and basic drugs, and the type and concentration of bile components impact solubilization, especially for lipophilic drugs when nonionized at the medium pH. Some participants noted that FaSSIF v1 appears to be suitable for BCS classes 1 and 3 compounds, whereas FaSSIF v2 may better capture solubilities of some BCS class 2 and 4 compounds. Investigating solubility in the fed state can be challenging due to the dependence of media composition and resulting drug solubility on meal content.
In addition, the inclusion of components such as fats in simulated gastric media requires careful preparation and complicated analytical techniques for assessing drug solubility. Nevertheless, gaps in the ability to model drug absorption in the fed state dictate the need to consider the impact of meal components on drug solubility. Several types of simulated fed state media, such as FeSSIF, FeSSGF, and FEDGAS (Biorelevant, London, UK), are available for this purpose. Considering these findings, the session concluded that it is crucial to deliberate whether customizing the buffer for specific applications or establishing standardized buffers is the most prudent approach. In any case, panelists emphasized the importance of providing precise and comprehensive descriptions when selecting buffers or biorelevant media. Given the limited experience in this field, it becomes imperative to offer supplementary information to facilitate a better understanding of the decisions made and their impact on the model.

3.1.2.3 Q3: When Is the Optimal Time to Measure the Solubility in Human Aspirates?

Measuring drug solubility in human aspirates has not gained widespread adoption due to factors such as availability and cost; however, participants recognized its potential benefits, especially in improving modeling of poorly soluble, nonionizable lipophilic drugs. These drugs often exhibit wide variation in solubility as a function of micelle or vesicle composition, since simulated fluids (e.g., FaSSIF) lack many endogenous, bile- or vesicle-forming components. Participants reached a consensus that the benefit of using aspirated human fluid rather than simulated fluid is probably less important if the drug is ionized in the GI tract. In these cases, pH is the main driver of solubility.

3.1.2.4 Q4: For Weak Bases, Is There Added Value in Measuring Solubility Across a Broad pH Range, Specifically pH 8–9? If so, Which Media Should Be Considered?
The participants agreed that the pH range over which solubility is measured is an essential factor to consider for weakly basic and weakly acidic drugs. This pH range should cover GI physiology, i.e., from approximately pH 1 to 8. Experimental points should capture multiple degrees of ionization (e.g., 0% ionized, 50% ionized, 90% ionized) depending on the pKa. Measurements at pH values >8 (using NaOH for adjustment) may be needed to capture drug intrinsic solubility for weak bases (i.e., highest basic pKa + 2 pH units). One may also consider determining solubility in purified water and unbuffered media to determine the surface pH of the drug. For salts of weak acids and bases, measurement of the solubility at and around pHmax is recommended. It was emphasized that researchers should measure the medium pH prior to addition of drug and the pH of the final saturated solution. Both start and final pH values should be reported. The media composition should also be documented, since the media may contain ions in common with the drug substance, which could depress drug solubility, or lead to salt formation, which could change the nature of the drug substance.

3.1.2.5 Q5: What Solubility Value Should Be Employed for Release from an Amorphous Solid Dispersion Containing a Polymer?

During the session, participants acknowledged the challenges associated with developing PBBMs for dosage forms containing an amorphous solid dispersion (ASD). When modeling release from ASDs, it is important to understand whether dissolution is controlled by the drug, the polymer, or the combination of the two. When the dissolution rate is driven by the drug, the amorphous (i.e., kinetic) solubility in the given medium is likely the appropriate solubility to employ for defining the rate of drug release. However, if the dissolving ASD contains both amorphous and crystalline drug, then the solubility of the crystalline form in that medium and its impact on drug release may also need to be considered.
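The Q4 suggestion above — estimating a drug's surface pH from the measured pH of a saturated solution in water or unbuffered medium — can be prototyped numerically. The sketch below solves the charge balance for a saturated monoprotic weak base by bisection; it is illustrative only (activity corrections, CO2 uptake, and buffer species are ignored, and the example pKa and intrinsic solubility are hypothetical):

```python
def saturated_slurry_ph(pka, intrinsic_solubility_molar, kw=1e-14):
    """pH of a saturated, unbuffered solution of a monoprotic weak base B.
    At saturation the neutral species is fixed at [B] = S0, so
    [BH+] = S0 * 10**(pka - pH).  Charge balance: [BH+] + [H+] - [OH-] = 0."""
    lo, hi = 0.0, 14.0
    for _ in range(60):  # bisection; residual is monotone decreasing in pH
        ph = (lo + hi) / 2
        h = 10.0 ** (-ph)
        bh = intrinsic_solubility_molar * 10.0 ** (pka - ph)
        residual = bh + h - kw / h  # positive -> pH guess too low
        if residual > 0:
            lo = ph
        else:
            hi = ph
    return ph

# hypothetical base: pKa 6.0, intrinsic solubility 1e-4 M -> slurry pH near 8
ph = saturated_slurry_ph(pka=6.0, intrinsic_solubility_molar=1e-4)
```

Consistent with the acalabrutinib case study, the computed slurry pH of a weak base sits above neutral and rises with intrinsic solubility, which is why the amorphous form (higher intrinsic solubility) showed the larger pH shift.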
When modeling drug precipitation and redissolution of ASDs, the amorphous solubility and the solubilities of any crystalline forms to which the amorphous drug may precipitate should be considered. Some ASDs may undergo liquid–liquid phase separation (LLPS) and precipitate to amorphous nanodroplets, which may then redissolve according to the amorphous solubility. In other cases, amorphous drug may crystallize, and the solubility of the crystalline form will be an important input to account for drug precipitation and solubility limitations to redissolution along the GI tract. It was also emphasized by participants that measuring amorphous solubility in the presence of formulation excipients, such as polymers, is critical. For example, ASD polymers can either decrease amorphous solubility or increase it through the formation of drug–polymer colloids. It is worth highlighting that the impact of these excipients varies as a function of time and concentration. Participants also noted that, for ASDs, acquiring an in-depth understanding of drug speciation, with a particular focus on detecting drug–polymer colloid formation using different analytical techniques, may be necessary since the presence of these species can impact the driving force for drug permeation. These considerations are pivotal for the effective development of PBBMs for ASDs.

In conclusion, the breakout session produced several significant takeaways. Participants in this session recognized the inherent complexity of drug solubility and its substantial influence on the development of PBBMs. The discussion brought to the forefront various critical topics, including distinctions between bulk, surface, thermodynamic, and kinetic solubility as well as points to consider during experimental measurements of these parameters.
Given the intricate nature of these phenomena, it is strongly encouraged to include details regarding the rationale behind model development for solubility inputs for regulatory submissions. These should comprise the criteria for selecting and applying specific solubility parameters, choosing appropriate models, defining the experimental conditions for measuring solubility values, and highlighting the theoretical assumptions. Additionally, participants advised conducting parameter sensitivity analyses to ensure a robust and comprehensive understanding of the models utilized in drug product quality assessments. Important points to consider when measuring bulk and surface solubilities of crystalline and amorphous drugs and formulations are presented in the Supporting Information . Q1: What Specifically Do Bulk and Surface Solubility Measurements Assess and Why Are These Assessments Crucial in the Context of PBPK/PBBM Modeling? Bulk drug solubility allows the calculation of drug amount dissolved at equilibrium if the volume of the medium is known, and its properties are not altered with time. Conversely, surface solubility is the drug solubility at the drug solid–liquid interface. While bulk solubility influences factors, such as solution-mediated precipitation, surface solubility drives drug dissolution and surface-mediated precipitation. For weakly acidic and basic drugs, surface pH may deviate from bulk pH when there is an acid–base reaction occurring at the drug liquid interface. , Consequently, measuring both bulk and surface solubility evaluations is important to accurately capture dissolution and precipitation rates in PBBMs. The choice of buffer for these measurements was highlighted as a key consideration and should align with the specific region of the GI tract being simulated. Furthermore, the session discussed the dynamic impact of excipients on the surface and bulk pH. 
For example, acidulants included in formulations gradually dissolve over time, and the extent of their effect depends on both time and concentration. This comprehensive discussion illuminated the critical role of understanding bulk and surface solubility and the contributing factors in making informed decisions during drug product development. Q2: Which Media (e.g., FaSSIF V1 and V2) Should Be Chosen for Accurate Comparison to the in Vivo Situation, Considering Factors Such as the Presence and Concentration of Bile Salts, Fats in the Stomach, and Buffer pH? Participants agreed that there is not a one-size-fits-all “best” version of simulated GI media to choose for accurate prediction of in vivo conditions but that each may serve distinct purposes in modeling scenarios. , When measuring drug solubilities across different versions of FaSSIF and aspirated human intestinal fluids, researchers have found solubility values to vary between media. , In addition, no single medium captures the normal variation in these fluids. It is important to understand the properties and compositions of different types of simulated media and how they may interact with the drug product of interest to influence solubility, dissolution, and precipitation. For example, fasted state simulated intestinal fluid (FaSSIF) evolved to have a lower buffer capacity when moving from version 1 to version 3. Version 3 incorporates additional bile components (e.g., lecithin hydrolysis products and cholesterol) that are not found in versions 1 or 2. Factors such as buffer capacity and buffer species can impact surface solubility for acidic and basic drugs, and the type and concentration of bile components impact solubilization, especially for lipophilic drugs when nonionized at the medium pH. Some participants noted that FaSSIF v1 appears to be suitable for BCS classes 1 and 3 compounds, whereas FaSSIF v2 may better capture solubilities of some BCS class 2 and 4 compounds. 
Investigating solubility in the fed state can be challenging due to the dependence of media composition and resulting drug solubility on meal content. In addition, the inclusion of components such as fats in simulated gastric media requires careful preparation and complicated analytical techniques for assessing drug solubility. Nevertheless, gaps in the ability to model drug absorption in the fed state dictate the need to consider the impact of meal components on drug solubility. Several types of simulated fed state media, such as FeSSIF, FeSSGF, and FEDGAS (Biorelevant, London, UK) are available for this purpose. Considering these findings, the session concluded that it is crucial to deliberate whether customizing the buffer for specific applications or establishing standardized buffers is the most prudent approach. In any case, panelists emphasized the importance of providing precise and comprehensive descriptions when selecting buffers or biorelevant media. Given the limited experience in this field, it becomes imperative to offer supplementary information to facilitate a better understanding of the decisions made and their impact on the model. Q3: When Is the Optimal Time to Measure the Solubility in Human Aspirates? Measuring drug solubility in human aspirates has not gained widespread adoption due to factors such as availability and cost; however, participants recognized its potential benefits, especially in improving modeling of poorly soluble, nonionizable lipophilic drugs. These drugs often exhibit wide variation in solubility as a function of micelle or vesicle composition, since simulated fluids (e.g., FaSSIF) lack many endogenous, bile- or vesicle-forming components. Participants reached a consensus that the benefit of using aspirated human fluid rather than simulated fluid is probably less important if the drug is ionized in the GI tract. In these cases, pH is the main driver of solubility. 
Q4: For Weak Bases, Is There Added Value in Measuring Solubility Across a Broad pH Range, Specifically pH 8–9? If so, Which Media Should Be Considered? The participants agreed that the pH range over which solubility is measured is an essential factor to consider for weakly basic and weakly acidic drugs. This pH range should cover the GI physiology, i.e., from approximately pH 1 to 8. Experimental points should capture multiple degrees of ionization (e.g., 0% ionized, 50% ionized, 90% ionized) depending on the pKa. Measurements at pH values >8 (using NaOH for adjustment) may be needed to capture drug intrinsic solubility for weak bases (i.e., highest basic pKa + 2 pH units). One may also consider measuring solubility in purified water and unbuffered media to establish the surface pH of the drug. For salts of weak acids and bases, the measurement of the solubility at and around pH max is recommended. It was emphasized that researchers should measure the medium pH prior to addition of drug and the pH of the final saturated solution. Both start and final pH values should be reported. The media composition should also be documented since it may contain ions in common with the drug substance, which could depress drug solubility or lead to salt formation that could change the nature of the drug substance. Q5: What Solubility Value Should Be Employed for Release from an Amorphous Solid Dispersion Containing a Polymer? During the session, participants acknowledged the challenges associated with developing PBBMs for dosage forms containing an amorphous solid dispersion (ASD). When modeling release from ASDs it is important to understand whether dissolution is controlled by the drug, the polymer, or the combination of the two. When dissolution rate is driven by the drug, the amorphous (i.e., kinetic) solubility in the given medium is likely the appropriate solubility to employ for defining the rate of drug release.
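The practical consequence of choosing the amorphous (kinetic) rather than the crystalline (thermodynamic) solubility as the model input can be shown with a toy calculation; both solubility values and the dose-to-volume ratio are hypothetical:

```python
# Hypothetical solubilities of the same drug in the same medium (ug/mL)
S_crystalline = 5.0    # thermodynamic solubility of the stable form
S_amorphous = 60.0     # kinetic (amorphous) solubility

# Maximum supersaturation relative to the crystalline form if the ASD
# dissolves completely to the amorphous limit
supersaturation = S_amorphous / S_crystalline

# Dissolved concentration at a hypothetical dose-to-volume ratio of 100 ug/mL,
# under each solubility assumption
dose_conc = 100.0
dissolved_if_amorphous_limited = min(dose_conc, S_amorphous)
dissolved_if_crystal_limited = min(dose_conc, S_crystalline)

print(f"supersaturation potential: {supersaturation:.0f}x")
print(f"dissolved (amorphous limit): {dissolved_if_amorphous_limited} ug/mL")
print(f"dissolved (crystalline limit): {dissolved_if_crystal_limited} ug/mL")
```

If the amorphous drug crystallizes in situ, the dissolved concentration can fall from the upper value toward the lower one, which is why both solubility inputs, and the precipitation kinetics connecting them, matter in an ASD model.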
However, if the dissolving ASD contains both amorphous and crystalline drug, then the solubility of the crystalline form in that medium and its impact on drug release may also need to be considered. When modeling drug precipitation and redissolution of ASDs, the amorphous solubility and solubilities of any crystalline forms to which the amorphous drug may precipitate should be considered. Some ASDs may undergo liquid–liquid phase separation (LLPS) and precipitate to amorphous nanodroplets, which may then redissolve according to the amorphous solubility. In other cases, amorphous drug may crystallize, and the solubility of the crystalline form will be an important input to account for drug precipitation and solubility limitations to redissolution along the GI tract. It was also emphasized by participants that measuring amorphous solubility in the presence of formulation excipients, such as polymers, is critical. For example, ASD polymers can either decrease amorphous solubility or increase it through the formation of drug–polymer colloids. It is worth highlighting that the impact of these excipients varies as a function of time and concentration. Participants also noted that, for ASDs, acquiring an in-depth understanding of drug speciation, with a particular focus on detecting drug–polymer colloid formation using different analytical techniques, may be necessary since the presence of these species can impact the driving force for drug permeation. These considerations are pivotal for the effective development of PBBMs for ASDs. In conclusion, the breakout session produced several significant takeaways. Participants in this session recognized the inherent complexity of drug solubility and its substantial influence on the development of PBBMs.
The discussion brought to the forefront various critical topics, including distinctions between bulk, surface, thermodynamic, and kinetic solubility as well as points to consider during experimental measurements of these parameters. Given the intricate nature of these phenomena, it is strongly encouraged to include details regarding the rationale behind model development for solubility inputs for regulatory submissions. These should comprise the criteria for selecting and applying specific solubility parameters, choosing appropriate models, defining the experimental conditions for measuring solubility values, and highlighting the theoretical assumptions. Additionally, participants advised conducting parameter sensitivity analyses to ensure a robust and comprehensive understanding of the models utilized in drug product quality assessments. Important points to consider when measuring bulk and surface solubilities of crystalline and amorphous drugs and formulations are presented in the Supporting Information. BO Session B - Dissolution Part 1: Development of a Biopredictive Dissolution Method This session began with speaker Raimar Loebenberg (University of Alberta) and was led by Paul Seo (FDA) and Nicoletta Fotaki (Bath University), with Ivy Song (Takeda) and Parnali Chatterjee (FDA) as scribes. 3.2.1 Presentation A typical approach for developing biopredictive dissolution methods for oral drug products is to first classify the molecule of interest according to the BCS and its appropriate subclass depending on the molecule’s functional groups. The next steps involve the choice of dissolution medium and dissolution method and their purpose. For example, a dissolution method used for quality control might be composed of pharmacopeial elements while a biopredictive method can use scientifically relevant setups and media mimicking different GI tract environments (e.g., biorelevant media and the Artificial Stomach and Duodenum (AS&D) apparatus).
Another important consideration is the mechanism governing bioavailability by either permeability or dissolution-controlled absorption. If the absorption is permeability-controlled, a minimum dissolution acceptance criterion is desired. Faster dissolution will not change the rate and extent of absorption. This is different if the process is dissolution controlled. Here, any change in drug release will alter the rate of absorption. Currently, there is unfortunately no universal dissolution medium available that can be used for all drugs. The following examples highlight which media and dissolution methods might be useful in the development of biopredictive dissolution methods. 3.2.1.1 Example 1: Permeability-Controlled Absorption Etoricoxib is a weak base and is classified as a BCS II drug substance. A study by Okumu et al. showed that, if a transfer model from the acidic stomach conditions into FaSSIF was used, the drug solubility was increased in the simulated intestinal fluid compared to its equilibrium solubility. Essentially, a supersaturated drug solution was formed. Then, a flow-through cell combined with a perfusion protocol mimicking the stomach and the different small intestinal segments was used and a dissolution profile was generated. When this profile was used in simulation software, the observed clinical PK data were predicted with a better fit compared to USP type dissolution profiles. Furthermore, a comparison between a solution and the physiologically mimicking flow-through protocol showed that both resulted in superimposable predictions of the PK profiles. The study concluded that, if the drug is fully dissolved in the stomach, it can form a supersaturated solution in the intestine and behaves like a BCS class I drug. Therefore, the AS&D apparatus may be more appropriate for such BCS IIb drug molecules. 3.2.1.2 Example 2: Dissolution-Controlled Absorption Montelukast sodium is a highly lipophilic drug with acid and basic functional groups. 
It is a BCS II/IV drug substance. A comparison between dissolution profiles from a USP type 2 apparatus with biorelevant media versus a flow-through protocol using physiologically adapted conditions showed significant differences. In the flow-through cell, the drug release was slower in the first 90 min compared to the USP type test. However, when the data were used in GastroPlus, the flow-through data matched the observed clinical data better than when other dissolution profiles were used as input. An alternative apparatus to the flow-through cell is based on the AS&D apparatus with more compartments. This method is also known as in vivo Predictive Dissolution (iPD). 3.2.1.3 Example 3: Lysosomal Trapping Lysosomal trapping is a potential mechanism to explain slow availability of lipophilic weak bases that otherwise are expected to rapidly appear in the postabsorptive systemic circulation. Predictability of lysosomal trapping is not well developed, although recent efforts aim to standardize testing for lysosomal trapping. Lysosomes are enzyme-filled vesicles in the cytoplasm that maintain a low pH inside. A weak base such as dextromethorphan is highly lipophilic at the pH inside an enterocyte. When the molecule crosses the lipophilic membrane of the lysosome, it finds itself at a much lower pH (4.5–5.5). There, its hydrophilicity increases significantly because the lower pH protonates it, and the now largely ionized molecule needs much longer to exit the lysosome. This is a potential reason it takes more than 16 h for the drug to appear completely in the systemic circulation. Based on simulations, the drug is predicted to completely dissolve in the GI tract and exhibit good permeability. The fraction of the dose absorbed into the enterocytes is about 100% within 2 h. The observed time lapse in the appearance in the systemic circulation is likely due to lysosomal trapping.
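The extent of lysosomal accumulation can be rationalized with the pH-partition hypothesis: if only the neutral species crosses the lysosomal membrane, the steady-state total concentration ratio follows from the pH gradient alone. The pKa below is a hypothetical value for a lipophilic weak base, not a measured value for dextromethorphan:

```python
# Ion-trapping estimate for a monoprotic weak base (pH-partition hypothesis):
# at steady state the neutral species equilibrates across the membrane, so
#   C_total(lysosome) / C_total(cytosol)
#     = (1 + 10**(pKa - pH_lys)) / (1 + 10**(pKa - pH_cyt))

pKa = 9.0          # hypothetical basic pKa
pH_cytosol = 7.2
pH_lysosome = 5.0  # lysosomal pH (reported range roughly 4.5-5.5)

def total_over_neutral(pH):
    # ratio of total (neutral + ionized) to neutral species for a base
    return 1 + 10 ** (pKa - pH)

trapping_ratio = total_over_neutral(pH_lysosome) / total_over_neutral(pH_cytosol)
print(f"predicted lysosome/cytosol accumulation: ~{trapping_ratio:.0f}x")
```

A two-order-of-magnitude reservoir like this, emptying slowly as the small neutral fraction leaks back out, is consistent with the multihour lag in systemic appearance described above.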
For drugs such as dextromethorphan, there is a lag time between the fraction of the dose absorbed into the enterocyte and the drug plasma levels. Setting dissolution specifications on the fraction of the dose absorbed into the enterocyte rather than using drug plasma levels would be beneficial. Recently, an artificial lysosomal fluid and a side-by-side diffusion cell method were developed which can be used to screen for the tendency of drugs to be trapped by lysosomes. 3.2.1.4 Example 4: Enteric Coated Dosage Forms The literature is full of reports of enteric coated dosage forms failing in vivo. In vitro dissolution testing according to the pharmacopeias uses a two-stage approach in which a dosage form is first tested in acid and then in pH 6.8 phosphate buffer. However, if low buffer capacity carbonate buffer is used instead of phosphate buffer, then the dissolution behavior dramatically changes, and depending on the carbonate concentration, the opening of the enteric coat is delayed. Another in vitro study showed that acidic and basic drugs also impact the delay of the coat opening in the carbonate buffer. Acidic drugs delayed the opening process, while basic drugs accelerated it. In low carbonate buffer, the coat opening was much slower compared to phosphate buffer. This was also shown for a failed bioequivalence study of pantoprazole. The dissolution profiles of the test and reference products were similar in phosphate buffer but differed significantly in carbonate buffer. Thus, carbonate buffers or other surrogates are useful when developing enteric coated dosage forms. 3.2.1.5 Example 5: Biphasic Dissolution Biphasic dissolution uses an organic layer on top of an aqueous dissolution medium as a sink for the lipophilic drug molecules. The test can be combined with a flow-through cell. In the present study, low buffer capacity (5 mmol) and low volumes (200 mL) were compared with regular strength phosphate buffer and 900 mL.
Test tablets containing ibuprofen, which were made by direct compression or granulation using different excipients, were investigated. The results showed that low buffer capacity and low immersion medium volumes have the best ability to detect differences in the manufacturing processes and formulations. Furthermore, organic sinks can allow the aqueous buffer pH to rebound once acidic drugs, which initially depress the buffer pH as they dissolve, partition into the organic layer. 3.2.1.6 Example 6: Lipid Dissolution The volume of the lymphatic system is larger than that of the vascular system. However, not much attention is given to this compartment in the context of PBBM. Today, many hydrophobic drugs are formulated into lipid drug delivery systems. Long-chain lipids can increase the lymphatic uptake of hydrophobic drugs. This occurs inside the enterocyte. Here, triglycerides and phospholipids are assembled into chylomicrons. Lipophilic drugs can be loaded into the chylomicrons and exit the enterocyte via the lymphatic pathway. An artificial lymphatic fluid was developed and tested for its sensitivity to inhibition and enhancement of lymphatic uptake. In a study similar to that of biphasic dissolution, a lymphatic compartment was added to a dissolution vessel. Three commercially available drug products containing terbinafine were tested in a USP type vessel and a flow-through cell. The aqueous dissolution of one product was significantly different from that of the other two products. This might be due to excipient differences in the formulations. However, the three products also showed differences in the accumulation of the drug in the lymphatic compartment. This new method is a promising approach to assessing formulations for their lymphatic uptake potential. The model might contribute to in vitro bioequivalence guidelines for lymphotropic formulations.
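The sink behavior of an organic (or lymph-mimicking) compartment can be illustrated with a toy two-compartment kinetic model: first-order dissolution into the aqueous phase followed by first-order, effectively irreversible transfer into the sink. The rate constants, dose, and step size are hypothetical, and a simple forward-Euler integration is used:

```python
# Toy biphasic dissolution model (all parameters hypothetical)
k_diss = 0.05   # dissolution rate constant, 1/min (first order in solid)
k_part = 0.10   # aqueous-to-organic transfer rate constant, 1/min
dose = 100.0    # total drug amount, arbitrary units
dt, t_end = 0.1, 240.0  # forward-Euler step (min) and simulated time (min)

solid, aqueous, organic = dose, 0.0, 0.0
for _ in range(int(t_end / dt)):
    dissolved = k_diss * solid * dt      # solid -> aqueous this step
    transferred = k_part * aqueous * dt  # aqueous -> organic sink this step
    solid -= dissolved
    aqueous += dissolved - transferred
    organic += transferred

print(f"after {t_end:.0f} min: solid {solid:.2f}, aqueous {aqueous:.2f}, "
      f"organic {organic:.2f}")
```

Because the sink continuously drains the aqueous phase, dissolved acidic drug leaves the buffer, consistent with the pH rebound noted above; comparing organic-phase accumulation profiles between formulations is the same logic applied in the lymphatic-compartment experiment.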
3.2.1.7 Conclusions First and foremost, the development of a dissolution method is driven by its purpose. When the development of a biorelevant, biopredictive dissolution method is the goal, the following may be considered: Flow-through cells and transfer-models are useful for dynamic dissolution protocols; small volumes and low buffer concentrations could be considered to mimic the physiological environments in the GI tract; carbonate buffers or suitable surrogates are helpful when evaluating enteric coated formulations; biphasic dissolution is an important tool to mimic the GI environment with dissolution and absorption occurring in parallel; and lipid dissolution is a promising approach to assess excipient effects for lymphotropic drugs. 3.2.2 Discussion This breakout session expanded and continued the discussions of the Hot Topic B on “Best Practices for Development of Biopredictive Dissolution Methods” as input into PBBM by taking into consideration the following questions. 3.2.2.1 Q1: When Biorelevant Dissolution Methods (e.g., Multicompartmental) Are Necessary, What Is the Best Way to Use These Methods? Developing a dissolution method should be dependent on its intended use, i.e., whether the method would be used for quality control purposes or for PBBM. For example, for screening for precipitation of weak bases, two-stage tests or transfer models can be useful. Biorelevant dissolution methods mimic biological fluids and physiology and may be developed solely to support PBBM, with no link to the QC dissolution method. In this case, the biopredictive nature of the biorelevant method is verified through the PBBM. 3.2.2.2 Q2: How Many Different Experimental Conditions Should Be Used for a Single Batch? There is no fixed number of experimental conditions that should be used to develop a biopredictive dissolution method. 
However, relevant sets of experiments could be conducted taking into consideration GI physiology, bile salts, buffer capacity, physicochemical properties of the DS, product design, and release mechanisms to develop biopredictive dissolution methods as input for PBBM. 3.2.2.3 Q3: What Are the Pitfalls of Dissolution (e.g., Degradation, Mixture of Polymorphs, and Precipitation) to Be Careful About, and How to Deal with Them? Precipitation of drugs is an important consideration in developing a dissolution method. To study the effect of drug precipitation during dissolution testing, transfer experiments are often conducted to estimate the precipitation times as input into PBBM to determine the effect on the bioavailability. 3.2.2.4 Q4: How Do You Separate Artifacts of the Dissolution Test and Its Significance (or Nonsignificance) on in Vivo Response (e.g., Coning Is Often a Dissolution Issue, But Is Minimally a Concern in Vivo)? Sometimes multiple experiments are conducted to address dissolution artifacts such as coning, cross-linking in capsules, etc. The use of Apex vessels (previously known as PEAK vessels) to address coning is gaining regulatory acceptance; however, it is often critical to generate as much data as possible early in product development to address these issues and to confirm, via a PK study, that the developed dissolution method is biopredictive. 3.2.2.5 Q5: How Should Functional Excipient Effects Be Investigated? What Are the Appropriate Methods and How Should Dissolution Methods Be Developed to Evaluate Excipient Effects? Dissolution methods should take into consideration the effect of key/functional excipients, such as the impact of excipients on bulk vs surface pH. Excipients can alter drug release and absorption; therefore, evaluating the effect of functional excipients early on is crucial.
Conducting a pilot in vivo PK study when an important functional excipient is present in the formulation may provide utility when building a dissolution safe space. 3.2.2.6 Q6: Depending on DS and DP Properties, What Level of Variation of Critical Biopharmaceutics Attributes (CBA) Is Needed to Demonstrate Discrimination and a Biopredictive Nature for the Dissolution Method? Depending on the product design and release mechanism, variations of >10% in functional excipients and in process parameters of the final formulation could be used to demonstrate the discriminating ability of the biopredictive/QC dissolution method and their impact on the bioavailability of the drug product (especially for basic drugs that have pH modifiers and enteric coatings).
The following examples highlight which media and dissolution methods might be useful in the development of biopredictive dissolution methods. 3.2.1.1 Example 1: Permeability-Controlled Absorption Etoricoxib is a weak base and is classified as a BCS II drug substance. A study by Okumu et al. showed that, if a transfer model from the acidic stomach conditions into FaSSIF was used, the drug solubility was increased in the simulated intestinal fluid compared to its equilibrium solubility. Essentially, a supersaturated drug solution was formed. Then, a flow-through cell combined with a perfusion protocol mimicking the stomach and the different small intestinal segments was used and a dissolution profile was generated. When this profile was used in simulation software, the observed clinical PK data were predicted with a better fit compared to USP type dissolution profiles. Furthermore, a comparison between a solution and the physiologically mimicking flow-through protocol showed that both resulted in superimposable predictions of the PK profiles. The study concluded that, if the drug is fully dissolved in the stomach, it can form a supersaturated solution in the intestine and behaves like a BCS class I drug. Therefore, the AS&D apparatus may be more appropriate for such BCS IIb drug molecules. 3.2.1.2 Example 2: Dissolution-Controlled Absorption Montelukast sodium is a highly lipophilic drug with acid and basic functional groups. It is a BCS II/IV drug substance. A comparison between dissolution profiles from a USP type 2 apparatus with biorelevant media versus a flow-through protocol using physiologically adapted conditions showed significant differences. In the flow-through cell, the drug release was slower in the first 90 min compared to the USP type test. However, when the data were used in GastroPlus, the flow-through data matched the observed clinical data better than when other dissolution profiles were used as input. 
An alternative apparatus to the flow-through cell is based on the AS&D apparatus with more compartments. This method is also known as in vivo Predictive Dissolution (iPD). 3.2.1.3 Example 3: Lysosomal Trapping Lysosomal trapping is a potential mechanism to explain slow availability of lipophilic weak bases that otherwise are expected to rapidly appear in the postabsorptive systemic circulation. Predictability of lysosomal trapping is not well developed, although recent efforts aim to standardize testing for lysosomal trapping. Lysosomes are enzyme filled vesicles in the cytoplasm that maintain a low pH inside. A weak base such as dextromethorphan is highly lipophilic at the pH inside of an enterocyte. When the molecule crosses the lipophilic membrane of the lysosome, it finds itself at a much lower pH (4.5–5.5). Here, its hydrophilicity significantly increases due to the drop in pH. Due to this shift in its lipophilic properties, the molecule now needs much longer to exit the lysosome. This is a potential reason it takes more than 16 h for the drug to appear completely in the systemic circulation. Based on simulations, the drug is predicted to completely dissolve in the GI tract and exhibit good permeability. The fraction of the dose absorbed into the enterocytes is about 100% within 2 h. The observed time lapse in the appearance in the systemic circulation is likely due to lysosomal trapping. For drugs such as dextromethorphan, there is a lag time between the fraction of the dose absorbed into the enterocyte and the drug plasma levels. Setting dissolution specifications on the fraction dose absorbed into the enterocyte rather than using drug plasma levels would be beneficial. Recently, an artificial lysosomal fluid and a side-by-side diffusion cell method were developed which can be used to screen for the tendency of drugs to be trapped by lysosomes. 
3.2.1.4 Example 4: Enteric Coated Dosage Forms Literature is full of reports that enteric coated dosage forms are failing in vivo. In vitro dissolution testing according to the pharmacopeias uses a two-stage approach in which a dosage form is first tested in acid and then in pH 6.8 phosphate buffer. However, if low buffer capacity carbonate buffer is used instead of phosphate buffer, then the dissolution behavior dramatically changes, and depending on the carbonate concentration, the opening of the enteric coat is delayed. Another in vitro study showed that acidic and basic drugs also impact the delay of the coat opening in the carbonate buffer. Acidic drugs delayed the opening process, while basic drugs increased the coat opening. In low carbonate buffer, the coat opening was much slower compared to phosphate buffer. This was also shown for a failed bioequivalence study of pantoprazole. The dissolutions of the test and reference products were similar in phosphate buffer but differed significantly in carbonate buffer. Thus, carbonate buffers or other surrogates are useful when developing enteric coated dosage forms. 3.2.1.5 Example 5: Biphasic Dissolution Biphasic dissolution uses an organic layer on top of an aqueous dissolution medium as a sink for the lipophilic drug molecules. The test can be combined with a flow-through cell. In the present study, low buffer capacity (5 mmol) and low volumes (200 mL) were compared with regular strength phosphate buffer and 900 mL. Test tablets containing ibuprofen, which were made by direct compression or granulation using different excipients, were investigated. The results showed that low buffer capacity and low immersion medium volumes have the best ability to detect differences in the manufacturing processes and formulations. 
Furthermore, organic sinks could allow for a rebound in aqueous buffer pH after dissolved drugs, which initially caused a drop in the buffer pH due to their acidic nature, partition into the organic layer. 3.2.1.6 Example 6: Lipid Dissolution The volume of the lymphatic system is larger than that of the vascular system. However, not much attention is given to this compartment in the context of PBBM. Today, many hydrophobic drugs are formulated into lipid drug delivery systems. Long-chain lipids can increase the lymphatic uptake of hydrophobic drugs. This occurs inside the enterocyte. Here, triglycerides and phospholipids are assembled into chylomicrons. Lipophilic drugs can be loaded into the chylomicrons and exit the enterocyte via the lymphatic pathway. An artificial lymphatic fluid was developed and tested regarding its sensitivity to lymphatic inhibition and enhancement uptake. In a study similar to that of biphasic dissolution, a lymphatic compartment was added to a dissolution vessel. Three commercially available drug products containing terbinafine were tested in a USP type vessel and a flow-through cell. The aqueous dissolution of one product was significantly different from that of the other two products. This might be due to excipient differences in the formulations. However, the three products also showed differences in the accumulation of the drug in the lymphatic compartment. This new method is a promising approach to assessing formulations for their lymphatic uptake potential. The model might contribute to in vitro bioequivalence guidelines for lymphotropic formulations. 3.2.1.7 Conclusions First and foremost, the development of a dissolution method is driven by its purpose. 
When the development of a biorelevant, biopredictive dissolution method is the goal, the following may be considered: Flow-through cells and transfer-models are useful for dynamic dissolution protocols; small volumes and low buffer concentrations could be considered to mimic the physiological environments in the GI tract; carbonate buffers or suitable surrogates are helpful when evaluating enteric coated formulations; biphasic dissolution is an important tool to mimic the GI environment with dissolution and absorption occurring in parallel; and lipid dissolution is a promising approach to assess excipient effects for lymphotropic drugs. Example 1: Permeability-Controlled Absorption Etoricoxib is a weak base and is classified as a BCS II drug substance. A study by Okumu et al. showed that, if a transfer model from the acidic stomach conditions into FaSSIF was used, the drug solubility was increased in the simulated intestinal fluid compared to its equilibrium solubility. Essentially, a supersaturated drug solution was formed. Then, a flow-through cell combined with a perfusion protocol mimicking the stomach and the different small intestinal segments was used and a dissolution profile was generated. When this profile was used in simulation software, the observed clinical PK data were predicted with a better fit compared to USP type dissolution profiles. Furthermore, a comparison between a solution and the physiologically mimicking flow-through protocol showed that both resulted in superimposable predictions of the PK profiles. The study concluded that, if the drug is fully dissolved in the stomach, it can form a supersaturated solution in the intestine and behaves like a BCS class I drug. Therefore, the AS&D apparatus may be more appropriate for such BCS IIb drug molecules. Example 2: Dissolution-Controlled Absorption Montelukast sodium is a highly lipophilic drug with acid and basic functional groups. It is a BCS II/IV drug substance. 
A comparison between dissolution profiles from a USP type 2 apparatus with biorelevant media versus a flow-through protocol using physiologically adapted conditions showed significant differences. In the flow-through cell, the drug release was slower in the first 90 min compared to the USP type test. However, when the data were used in GastroPlus, the flow-through data matched the observed clinical data better than when other dissolution profiles were used as input. An alternative apparatus to the flow-through cell is based on the AS&D apparatus with more compartments. This method is also known as in vivo Predictive Dissolution (iPD). Example 3: Lysosomal Trapping Lysosomal trapping is a potential mechanism to explain slow availability of lipophilic weak bases that otherwise are expected to rapidly appear in the postabsorptive systemic circulation. Predictability of lysosomal trapping is not well developed, although recent efforts aim to standardize testing for lysosomal trapping. Lysosomes are enzyme filled vesicles in the cytoplasm that maintain a low pH inside. A weak base such as dextromethorphan is highly lipophilic at the pH inside of an enterocyte. When the molecule crosses the lipophilic membrane of the lysosome, it finds itself at a much lower pH (4.5–5.5). Here, its hydrophilicity significantly increases due to the drop in pH. Due to this shift in its lipophilic properties, the molecule now needs much longer to exit the lysosome. This is a potential reason it takes more than 16 h for the drug to appear completely in the systemic circulation. Based on simulations, the drug is predicted to completely dissolve in the GI tract and exhibit good permeability. The fraction of the dose absorbed into the enterocytes is about 100% within 2 h. The observed time lapse in the appearance in the systemic circulation is likely due to lysosomal trapping. 
For drugs such as dextromethorphan, there is a lag time between the fraction of the dose absorbed into the enterocyte and the drug plasma levels. Setting dissolution specifications on the fraction dose absorbed into the enterocyte rather than using drug plasma levels would be beneficial. Recently, an artificial lysosomal fluid and a side-by-side diffusion cell method were developed which can be used to screen for the tendency of drugs to be trapped by lysosomes. Example 4: Enteric Coated Dosage Forms Literature is full of reports that enteric coated dosage forms are failing in vivo. In vitro dissolution testing according to the pharmacopeias uses a two-stage approach in which a dosage form is first tested in acid and then in pH 6.8 phosphate buffer. However, if low buffer capacity carbonate buffer is used instead of phosphate buffer, then the dissolution behavior dramatically changes, and depending on the carbonate concentration, the opening of the enteric coat is delayed. Another in vitro study showed that acidic and basic drugs also impact the delay of the coat opening in the carbonate buffer. Acidic drugs delayed the opening process, while basic drugs increased the coat opening. In low carbonate buffer, the coat opening was much slower compared to phosphate buffer. This was also shown for a failed bioequivalence study of pantoprazole. The dissolutions of the test and reference products were similar in phosphate buffer but differed significantly in carbonate buffer. Thus, carbonate buffers or other surrogates are useful when developing enteric coated dosage forms. Example 5: Biphasic Dissolution Biphasic dissolution uses an organic layer on top of an aqueous dissolution medium as a sink for the lipophilic drug molecules. The test can be combined with a flow-through cell. In the present study, low buffer capacity (5 mmol) and low volumes (200 mL) were compared with regular strength phosphate buffer and 900 mL. 
Test tablets containing ibuprofen, made by direct compression or granulation using different excipients, were investigated. The results showed that low buffer capacity and low medium volumes have the best ability to detect differences in the manufacturing processes and formulations. Furthermore, the organic sink allows the aqueous buffer pH to rebound after the dissolved drug, which initially caused a drop in the buffer pH due to its acidic nature, partitions into the organic layer.

Example 6: Lipid Dissolution

The volume of the lymphatic system is larger than that of the vascular system. However, not much attention is given to this compartment in the context of PBBM. Today, many hydrophobic drugs are formulated into lipid drug delivery systems. Long-chain lipids can increase the lymphatic uptake of hydrophobic drugs. This occurs inside the enterocyte, where triglycerides and phospholipids are assembled into chylomicrons. Lipophilic drugs can be loaded into the chylomicrons and exit the enterocyte via the lymphatic pathway. An artificial lymphatic fluid was developed and tested for its sensitivity to inhibition and enhancement of lymphatic uptake. In a study similar to that of biphasic dissolution, a lymphatic compartment was added to a dissolution vessel. Three commercially available drug products containing terbinafine were tested in a USP type 2 vessel and a flow-through cell. The aqueous dissolution of one product was significantly different from that of the other two products, possibly due to excipient differences in the formulations. However, the three products also showed differences in the accumulation of the drug in the lymphatic compartment. This new method is a promising approach to assessing formulations for their lymphatic uptake potential. The model might contribute to in vitro bioequivalence guidelines for lymphotropic formulations.
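To make the role of the organic sink concrete, here is a minimal, hypothetical two-compartment simulation (not the published method): solid drug dissolves into the aqueous phase at a rate proportional to the remaining solid and the distance from saturation, and dissolved drug partitions irreversibly into the organic layer with a first-order rate constant. All parameter values are invented for illustration.

```python
def simulate_biphasic(dose_mg=100.0, v_aq_ml=200.0, cs_mg_ml=0.2,
                      k_diss=0.05, k_part=0.1, t_end_min=240.0, dt=0.1):
    """Euler integration of a toy biphasic dissolution model.

    k_diss: dissolution rate constant (1/min), scaled by how far the
    aqueous phase is from saturation; k_part: first-order transfer into
    the organic sink (1/min). Returns (solid, aqueous, organic) in mg.
    """
    solid, aq, org = dose_mg, 0.0, 0.0
    for _ in range(int(t_end_min / dt)):
        c_aq = aq / v_aq_ml
        diss = k_diss * solid * max(0.0, 1.0 - c_aq / cs_mg_ml) * dt
        part = k_part * aq * dt
        solid -= diss
        aq += diss - part
        org += part
    return solid, aq, org

# Without the organic sink, the aqueous phase saturates (0.2 mg/mL in
# 200 mL holds only 40 mg of a 100 mg dose) and dissolution stalls;
# with the sink, dissolution proceeds toward completion.
no_sink = simulate_biphasic(k_part=0.0)
with_sink = simulate_biphasic(k_part=0.1)
```

The same structure extends naturally to a lymphatic compartment as described for Example 6, with the artificial lymphatic fluid playing the role of the sink.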
Conclusions

First and foremost, the development of a dissolution method is driven by its purpose. When the development of a biorelevant, biopredictive dissolution method is the goal, the following may be considered: flow-through cells and transfer models are useful for dynamic dissolution protocols; small volumes and low buffer concentrations could be considered to mimic the physiological environments in the GI tract; carbonate buffers or suitable surrogates are helpful when evaluating enteric coated formulations; biphasic dissolution is an important tool to mimic the GI environment with dissolution and absorption occurring in parallel; and lipid dissolution is a promising approach to assess excipient effects for lymphotropic drugs.

Discussion

This breakout session expanded and continued the discussions of Hot Topic B on “Best Practices for Development of Biopredictive Dissolution Methods” as input into PBBM by taking into consideration the following questions.

3.2.2.1 Q1: When Biorelevant Dissolution Methods (e.g., Multicompartmental) Are Necessary, What Is the Best Way to Use These Methods?

Developing a dissolution method should be dependent on its intended use, i.e., whether the method would be used for quality control purposes or for PBBM. For example, for screening for precipitation of weak bases, two-stage tests or transfer models can be useful. Biorelevant dissolution methods mimic biological fluids and physiology and may be developed solely to support PBBM, with no link to the QC dissolution method. In this case, the biopredictive nature of the biorelevant method is verified through the PBBM.

3.2.2.2 Q2: How Many Different Experimental Conditions Should Be Used for a Single Batch?

There is no fixed number of experimental conditions that should be used to develop a biopredictive dissolution method.
However, relevant sets of experiments could be conducted taking into consideration GI physiology, bile salts, buffer capacity, physicochemical properties of the DS, product design, and release mechanisms to develop biopredictive dissolution methods as input for PBBM.

3.2.2.3 Q3: What Are the Pitfalls of Dissolution (e.g., Degradation, Mixture of Polymorphs, and Precipitation) to Be Careful about and How to Deal with Them?

Precipitation of drugs is an important consideration in developing a dissolution method. To study the effect of drug precipitation during dissolution testing, transfer experiments are often conducted to estimate precipitation times as input into PBBM to determine the effect on bioavailability.

3.2.2.4 Q4: How Do You Separate Artifacts of the Dissolution Test and Their Significance (or Nonsignificance) on in Vivo Response (e.g., Coning Is Often a Dissolution Issue but Is Minimally a Concern in Vivo)?

Sometimes multiple experiments are conducted to address dissolution artifacts such as coning, cross-linking in capsules, etc. The use of Apex vessels (previously known as PEAK vessels) to address coning is gaining regulatory acceptance; however, generating as much data as possible early in product development to address these issues and determining whether the developed dissolution method is biopredictive by conducting a PK study is often critical.

3.2.2.5 Q5: How Should Functional Excipient Effects Be Investigated? What Are the Appropriate Methods and How Should Dissolution Methods Be Developed to Evaluate Excipient Effects?

Dissolution methods should take into consideration the effect of key/functional excipients, such as the impact of excipients on bulk vs surface pH. Excipients can alter drug release and absorption; therefore, evaluating the effect of functional excipients early on is crucial.
Conducting a pilot in vivo PK study when an important functional excipient is present in the formulation may provide utility when building a dissolution safe space.

3.2.2.6 Q6: Depending on DS and DP Properties, What Level of Variation of Critical Biopharmaceutics Attributes (CBA) Is Needed to Demonstrate Discrimination and a Biopredictive Nature for the Dissolution Method?

Depending on the product design and release mechanism, >10% variations in functional excipients and process parameters of the final formulation could be used to demonstrate the discriminating ability of the biopredictive/QC dissolution method and their impact on the bioavailability of the drug product (especially for basic drugs that have pH modifiers and enteric coatings).
BO Session C - Dissolution Part 2: Modeling in Vitro Dissolution Data

This session began with Xavier Pepin (Simulations Plus, Inc.) and was led by Cordula Stillhart (Roche) and Luiza Borges (ANVISA), with Grace Chen (Takeda) and Megerle Scherholz (BMS) as scribes.

3.3.1 Presentation: Methods for Integrating Dissolution

During breakout session C, Xavier Pepin presented a comprehensive overview and description of methods for integrating dissolution profiles into PBBMs, followed by practical considerations on the critical aspects when in vitro dissolution data are used for dissolution model development. This background served as a basis for developing and discussing checklists and a decision tree for dissolution method selection to support the integration of dissolution data into PBBMs. There are many ways to integrate dissolution into most PBBM platforms. These methods range from less to more mechanistic, as shown in the accompanying figure. For an IR dosage form, using one method over the others implies certain assumptions about the parameters limiting in vivo dissolution.

3.3.1.1 Direct Input

The least mechanistic method to integrate dissolution is direct input of the in vitro dissolution data into the model. In this case, the assumption made is that the in vitro dissolution method is representative of the conditions prevailing in vivo, which govern the drug dissolution.
In more detail, if such a method is used, one should confirm that neither solubility, drug dose, nor in vivo volume would limit the in vivo dissolution, since there are wide differences between the volumes used in vitro and the volumes observed in vivo. In addition, the in vitro hydrodynamics should be representative of in vivo conditions, or should not impact in vitro release, again because the in vivo hydrodynamics differ from those in vitro. Such assumptions are reasonable when the drug substance is BCS 1 or BCS 1-like and when the formulation itself governs the in vitro and in vivo dissolution.

3.3.1.2 Weibull Function

The use of a Weibull function fitted to in vitro dissolution data is also a nonmechanistic approach, as the in vivo release depends on time only. Similar assumptions to those supporting the direct input of dissolution data are made when using a Weibull function, although it is preferable to use a Weibull function over direct input, since the Weibull function provides a smoother dissolution curve passing through the measured dissolution data. For direct input methods, as the number of time points for measuring dissolution is generally limited, interpolating dissolution data with a linear correlation between measurements may lead to inaccurate predictions of in vitro (and in vivo) dissolution.

3.3.1.3 Z-factor

The use of a Z-factor vs pH profile, or a constant Z-factor, provides a more mechanistic model. The Z-factor introduced by Takano et al. is a lumped factor equal to the drug diffusion coefficient (D) divided by the product of the true density (ρ), the initial particle radius (r0), and the thickness of the unstirred water layer (h):

z = D/(ρ·h·r0)   (1)

It is evident from eq 1 that the Z-factor can also be expressed in terms of the initial drug particle radius in the formulation. It is also evident from this equation that there is only one bin (one particle size) in the Z-factor.
Hence, if the observed in vitro dissolution rate shows more than one phase, a single bin may not be enough to adequately characterize the dissolution of the particles in the formulation. Multiple release phases could arise from the presence of extragranular fine drug substance alongside granulated drug substance, or from drug substance particles that wet at different rates. In theory, there should be no dependency of the Z-factor on pH, as pH governs the drug solubility, which is considered independently in the equation proposed by Takano et al. to predict in vitro and in vivo dissolution. In addition, the fact that the drug diffusion coefficient is an integral part of the Z-factor definition should lead to caution when employing the Z-factor to fit dissolution data obtained in media containing surfactants. Indeed, the size of surfactant micelles spans an order of magnitude, which affects the diffusion coefficient of micelle-bound drug by the same order of magnitude. The sizes of common micelles have been summarized from literature data.

3.3.1.4 P-PSD

The product particle size distribution (P-PSD) was introduced by Pepin et al., where the disappearance of solid drug vs time is expressed as

dXs/dt = −A(t)·(CS,u − Cu(t))·[Du/hu(t) + ((1 − fu)/fu)·Db/hb(t)]   (2)

where fu is the drug fraction unbound, Du is the diffusion coefficient of unbound drug, Db is the diffusion coefficient of micelle-bound drug, A(t) is the available drug surface area at time t, hu(t) is the unstirred water layer thickness for unbound drug, hb(t) is the unstirred water layer thickness for micelle-bound drug, CS,u is the unbound drug solubility at the surface of the crystal, and Cu(t) is the unbound drug bulk concentration at time t. A(0) is the initial drug substance surface area, which can be represented as a 1 to 10 bin spherical product particle size distribution, the P-PSD.
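The sensitivity of Db to micelle size noted above follows from the Stokes–Einstein relation, D = kB·T/(6·π·η·r). A small sketch (the radii and viscosity are illustrative assumptions, not values from the report):

```python
import math

def stokes_einstein_d(radius_m: float, temp_k: float = 310.15,
                      viscosity_pa_s: float = 0.7e-3) -> float:
    """Diffusion coefficient (m^2/s) of a sphere from the Stokes-Einstein
    relation; viscosity of water at ~37 C assumed."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

d_free = stokes_einstein_d(0.5e-9)   # free drug, ~0.5 nm radius (assumed)
d_micelle = stokes_einstein_d(5e-9)  # micelle-bound drug, ~5 nm radius (assumed)
# A 10-fold larger hydrodynamic radius gives a 10-fold smaller D.
```

This order-of-magnitude drop in diffusivity for micelle-bound drug is why a Z-factor fitted in surfactant-free medium tends to overpredict dissolution in micellar media.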
Since the P-PSD can comprise from 1 to 10 bins, there is enough granularity to fit complex dissolution profiles, including those presenting multiple phases. The number of bins can be tuned to the observed dissolution data; it is recommended to start from the minimum number of bins and increase the number until there is no difference in predictive power across the observed dissolution data. The P-PSD approach can be applied to dissolution equations beyond the one presented in eq 2. In fact, in platforms such as DDDPlus (Simulations Plus), SIVA (Certara), and MoBi (Open Systems Pharmacology [OSP]), the P-PSD can be fitted to observed dissolution data. In these cases, the P-PSD takes the form of a mean spherical particle radius associated with a distribution across the mean; only one mode of distribution is currently available in these platforms. The equation proposed by Pepin et al. stems from the approach proposed by Gamsiz et al.; however, it assumes immediate partitioning of drug to micelles at the surface of the drug, and different thicknesses of the UWL for free and micelle-bound drug, according to the relation proposed by Pohl et al.:

hb(t) = hu(t)·(Db/Du)^(1/3)   (3)

A comparison between the use of the Z-factor and the P-PSD approach shows the increased predictive performance of the P-PSD, which is related to its ability to differentiate free and micelle-bound drug and to capture the impact of micelle size on the diffusion coefficient of micelle-bound drug. The Z-factor and P-PSD approaches give a similar description of the shape of the dissolution profile of the 100 mg acalabrutinib capsule batch L0505009 in phosphate buffer, pH 6.8. If these dissolution data are used to fit the Z-factor and P-PSD, prediction of dissolution of the same batch in media containing bile salts shows the advantage of the P-PSD over the Z-factor.
The use of the apparent drug solubility in both tested media with the surfactant, combined with a Z-factor fitted on the medium without the surfactant, leads to an overestimation of the observed dissolution rate. The drug dissolves more slowly due to the smaller diffusion coefficient of micelle-bound drug, which is best captured with the P-PSD approach. Recently, two additional P-PSD models were proposed: one integrating the fluid velocity in the USP 2 dissolution apparatus, the P-PSD HD, and one predicting drug and excipient sedimentation and cone formation at the bottom of the USP 2 vessel, the P-PSD HDC. These latter models are important to remove the potential bias coming from formulation sedimentation, or to integrate the impact of fluid velocity in USP 2, which is important for large particles or large dosage forms such as eroding tablets or pellets.

The P-PSD concept stems from the fact that the drug substance particle size available for dissolution in the drug product cannot be measured adequately with sizing methods such as laser diffraction applied to the drug substance (DS PSD). DS PSD is an important quality control of a starting material, but the impact of excipients and manufacturing process conditions on the drug substance area available for dissolution cannot be ignored.

Process: It is well-known that compression forces during dry granulation or tablet manufacture lead to fragmentation of brittle drug substances and excipients. Fragmentation also affects larger particles at low compression forces while showing little effect on smaller particles below a threshold size. The use of a single Diffusion Layer Model (DLM) scale factor applied to the measured DS PSD to predict the effect of processing parameters on the DS surface area available in a final formulation therefore cannot be sustained theoretically.
DS Particle Aggregation: Aggregation of primary particles in the DS is another factor that can introduce a strong bias into predicting the DS surface available for dissolution. Loose or strong aggregates can form in a drug substance because of material properties, manufacturing process, or storage. Laser diffraction methods typically size an aggregate of primary particles as one large particle with a low surface-to-volume ratio, leading to an underestimation of the drug surface area available for dissolution, as easily demonstrated by comparing laser diffraction predicted powder surface area to BET specific surface area for various batches of drug substances showing various levels of aggregation.

Shape: The shape of particles also influences the difference between laser diffraction predicted size and surface area measured with an orthogonal technique such as BET specific surface area. Laser diffraction techniques, which project a volume-equivalent sphere for each particle, introduce a bias to the measurements the further the particle is from a spherical morphology.

Wettability: Finally, the DS particle size cannot predict the impact of the drug substance wetting ability on the dissolution rate. Kim et al. have shown that dry coating the surface of drug crystals with a hydrophilic or hydrophobic material can influence aggregation of particles up to a certain surface coverage and also influence drug dissolution through alteration of the surface energy of the drug, which changes how water wets the drug surface. The correlation between drug wettability and dissolution has been reported in the literature, and formulation scientists frequently employ wetting agents as excipients to improve the wettability of drugs in final formulations. The sensitivity of the dissolution rate to drug wettability is especially pronounced for small particles.
For example, nanosizing technologies require the presence of surfactants to achieve the desired size and suspension stability, i.e., preventing aggregation and slowing Ostwald ripening. For all of the reasons highlighted above, the size of DS particles measured prior to processing the DS into the final formulation is rarely a good predictor of the drug substance area available for dissolution. There may be rare exceptions to this rule, for example, if the formulation is a suspension, or if the formulation is dry but comprises wettable amorphous spray-dried drug particles encapsulated with low-energy processes. The effect of formulation excipients and processing parameters should be integrated into mechanistic modeling approaches of drug product dissolution. The P-PSD or Z-factor can serve this purpose.

3.3.2 Discussion

The discussion was centered around 5 key questions.

3.3.2.1 Q1: What Is the Appropriate Dissolution Model for an IR Formulation?

A recent review by Anand et al. showed that direct input, Weibull functions, the Z-factor, and the P-PSD were widely applied methods for integrating dissolution into PBBM. Mechanistic approaches such as the Z-factor or the P-PSD were mostly used for low-solubility products, and mechanistic methods were applied in 60% of the 27 case studies. The advantage of mechanistic dissolution models over Weibull functions is that between- and within-subject variability in in vivo dissolution during population modeling can be captured in a more relevant way. Instead of applying random variation to the dissolution (as would be done with a Weibull function), mechanistic models rely on variation in system parameters (e.g., volumes, pH, transit times, composition in bile salts) to recalculate a different in vivo dissolution profile for the drug product in each simulation. This yields in vivo dissolution closer to reality compared with random variation.
Also, the use of mechanistic models is the only option when the model is to be used to predict the impact of prandial state, pH-related DDIs, or in vivo dissolution across different populations, all situations where GI physiological changes may profoundly affect the in vivo dissolution rate and make it deviate from the dissolution rate measured in vitro. The criteria to select a dissolution method should therefore be driven by the understanding of the drug product release mechanism and the limitations to in vitro and in vivo dissolution, the impact of manufacturing process and formulation on dissolution, and how well these can be simulated with a given approach. For mechanistic models, it is recommended to generate dissolution data with the same batch in several media/conditions to be able to verify the choice of model and its prediction performance in vitro prior to integration of the batch-specific data (Z-factor or P-PSD) into the model. Ideally, to fit dissolution data and extract the Z-factor or P-PSD, the chosen method should be discriminative, and the batch dissolution should show an adequate profile, ideally with full dissolution in the medium considered. Practically, this corresponds to picking a dissolution method where most measured data fall between 20% and 80% drug dissolved. Typically, a 1× dissolution method as described by Kuiper, where the drug dose divided by the dissolution volume nears the drug solubility in the dissolution medium, ensures maximal discrimination while allowing full dissolution. Fitting a mechanistic dissolution model to one such method, rather than to all dissolution methods simultaneously, is optimal, as the integration of nondiscriminating methods may bias the batch-specific Z-factor or P-PSD determination.
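As a concrete illustration of the empirical fitting discussed earlier, the sketch below fits a Weibull function, F(t) = Fmax·(1 − exp(−(t/MDT)^b)), to a handful of synthetic dissolution points using a plain least-squares grid search (pure Python, so no fitting library is assumed; the data are invented for illustration):

```python
import math

def weibull(t, f_max, mdt, b):
    """Cumulative Weibull dissolution: F(t) = Fmax*(1 - exp(-(t/MDT)^b))."""
    return f_max * (1.0 - math.exp(-((t / mdt) ** b)))

# Synthetic "observed" profile (time in min, % dissolved), generated
# from Fmax=100, MDT=30, b=1.2 -- illustrative, not real data.
times = [5, 10, 15, 20, 30, 45, 60, 90]
observed = [weibull(t, 100.0, 30.0, 1.2) for t in times]

def fit_weibull(times, observed, f_max=100.0):
    """Least-squares grid search: MDT over 20.0-40.0 min, b over 0.80-1.60."""
    best = None
    for mdt10 in range(200, 401):
        for b100 in range(80, 161):
            mdt, b = mdt10 / 10.0, b100 / 100.0
            sse = sum((weibull(t, f_max, mdt, b) - y) ** 2
                      for t, y in zip(times, observed))
            if best is None or sse < best[0]:
                best = (sse, mdt, b)
    return best[1], best[2]

mdt_fit, b_fit = fit_weibull(times, observed)
```

A smooth fitted curve of this kind avoids the linear-interpolation artifacts of direct input when dissolution is sampled at only a few time points.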
Based on the strengths and limitations of each individual dissolution modeling method presented during breakout session C, a decision tree for dissolution model selection was discussed with the audience. The proposed decision tree provides considerations for developing a dissolution model depending on the disintegration properties of the dosage form, the occurrence of coning or sedimentation during dissolution testing, and the sensitivity of the dissolution rate to changes in agitation conditions, volume, dose, and pH, as well as the presence of surfactant in the dissolution medium. The proposed decision tree is tailored to oral IR dosage forms and presents a clear description of the modeling assumptions to be considered when selecting a dissolution model. There was general agreement from the attendees that such a decision tree for dissolution model selection provides a valuable tool both for biopharmaceutics modelers in the pharmaceutical industry and for regulators when reviewing submitted PBBM cases.

3.3.2.2 Q2: What Are the Input Parameters Required to Mechanistically Evaluate the in Vitro Dissolution Data?

When developing a mechanistic dissolution model, the availability of high-quality input data for model parametrization should be a priority. This includes the availability of a sufficient number of in vitro dissolution profiles collected under relevant experimental conditions, depending on the intended purpose of the model. For example, if the PBBM aims at predicting a pH-related DDI, then the dissolution model may need to be developed and validated using in vitro data generated under various pH conditions. The experimental parameters describing the dissolution setup should be defined for each corresponding dissolution data set, and for dissolution media including surfactants, the properties of the micellar system should also be adequately characterized.
A table of suggested data to collect was presented, which could serve as a checklist in the context of dissolution model development. In addition to the in vitro data that are generated for direct input into the dissolution model, there may be a need to generate supplementary data to support specific modeling assumptions or to mechanistically explain anomalies. For example, if slow dissolution in pure aqueous systems is attributed to poor drug wettability, this hypothesis may be strengthened by generating in vitro dissolution data in a medium including a surfactant. Similarly, if in vitro dissolution is slow, presumably due to poor tablet disintegration, the hypothesis may be further supported by generating in vitro dissolution profiles of the pure DS or of drug product intermediates (granules or final blend prior to tablet compression). Such mechanistic investigations may not directly feed into the model but provide key information to increase confidence in the selected model parameters and modeling assumptions.

3.3.2.3 Q3: What Are the Criteria and Acceptable Thresholds for in Vitro Dissolution Model Validation?

If more than one mechanistic modeling method may be applicable, the calculation of model performance indicators such as the average fold error (AFE) and absolute average fold error (AAFE) can provide a rationale for method choice. Ultimately, the prediction performances of the various dissolution modeling methods in the PBBM can also be compared. Examples of dissolution model fitting and its impact on PBBM prediction were also shared; the outcome can be found in the Supporting Information.

3.3.2.4 Q4: Which Are the Factors to Be Considered When Modeling Dissolution?

Prior to the integration of dissolution data into a PBBM, a critical assessment of the quality and relevance of the experimental dissolution data may be useful. In this context, there are several factors to pay attention to, as summarized below.
Agitation: The impact of agitation should be considered when choosing an integration method. All models are derived from the Noyes-Whitney equation (i.e., Johnson, Wang-Flanagan, Takano, Gamsiz, Pepin, or Salehi) and rely on the definition of the UWL thickness around dissolving particles. The UWL thickness is a function of the fluid velocity around the dissolving particle in the dissolution medium (in vitro and in vivo). When the fluid velocity tends to zero, the thickness of the UWL tends to the radius of the spherical particle; as an approximation, the UWL thickness is equal to the particle radius up to an upper limit of 30 μm, which is supported by simulations and experiments in the literature. This hypothesis also fits with the low fluid velocity typically measured in vivo throughout the GI tract, where the average velocity is in the range of 1–2 cm/s, with transient peak velocities of more than 15 cm/s. For particle sizes larger than 30 μm, the UWL thickness typically depends on agitation, as shown, for example, by Scholz et al. When a significant impact of agitation on the dissolution rate is shown, the in vitro dissolution model should accommodate the impact of hydrodynamics.

Surface pH and Surface Solubility: When the drug has acidic or basic moieties, depending on the pH and composition of the aqueous dissolution medium, an acid-base reaction can happen locally at the surface of the dissolving drug particles without necessarily affecting the bulk pH. This reaction changes the pH within the UWL, with the maximal change observed at the surface of the drug. This phenomenon was described theoretically and experimentally in the literature for weak acids, bases, and their salts thanks to the work of Higuchi et al., Mooney et al., and Serajuddin et al. Since the drug surface solubility drives the dissolution rate, it is imperative to consider the drug surface solubility to mechanistically model in vitro and in vivo dissolution rates.
If there is a rapid phase change, such as salt disproportionation to the free base, then the free base surface solubility at the medium pH should be determined. Surface pH, also known as microenvironmental pH, is driven by the drug substance but can also be largely influenced by excipients added to the formulation, and excipients should be considered when analyzing dissolution data. The formulation composition should always be known so as to evaluate potential interactions between the drug and excipients during dissolution, but also in the solid state, as these reactions can also lead to polymorphic transitions.

Chemical Degradation: Chemical degradation can happen during dissolution and affect the amount of drug that is dissolved. A typical example is the dissolution of rifampicin in the presence or absence of isoniazid. Bell-shaped dissolution curves or a dissolution plateau below the theoretical batch assay could indicate the potential for in vitro degradation. The degradation rate should be measured in a separate experiment with solubilized drug by following the drug concentration over time in the dissolution medium. If degradation is confirmed, it can be integrated into the model (in vitro and in vivo) to account for a better fit of in vitro dissolution and the amount of drug available for in vivo absorption.

Physical Degradation: Bell shapes or plateaus during dissolution may also demonstrate (beyond a lack of sufficient solubility or medium volume to dissolve the full drug dose) that a polymorphic transition is happening or that there is a polymorphic impurity in the drug substance. For example, a mixture of polymorphic forms with different solubility values will lead to variation in the rate and extent of dissolution.
Precipitation from an amorphous to a crystalline form, or from a salt/cocrystal to its free form, will lead to a change in dissolution rate or even to a complete stop of drug dissolution if the precipitation occurs on the surface of the drug product. The presence of cosolvents or polymers can also change the rate and extent of surface precipitation, and, where relevant, such excipients should be considered critical to the product performance. Drug Product Disintegration: The impact of capsule opening, or tablet disintegration, on the dissolution profile has been widely presented in the literature. Since dissolution models assume that all the drug particles are available at time zero for dissolution, the disintegration time or capsule opening time should be removed from the observed dissolution data prior to fitting the dissolution rate. This can be achieved by subtracting the time needed for drug release from the observed dissolution time. If possible, models for capsule opening and tablet disintegration should be fitted to in vitro data and applied to in vivo data. It is also known that in vivo capsule opening, or in vivo tablet disintegration, takes longer than the time observed during USP disintegration testing and would impact gastric residence in vivo. Method Artificial Effects: In addition to the intrinsic properties of the drug substance and drug product described above, the in vitro dissolution performance may be affected by artificial effects in the in vitro dissolution setup, which may not necessarily have relevance for in vivo dissolution. Such effects include in vitro sedimentation or coning and the interaction with components of the dissolution medium. In vitro sedimentation introduces a bias to the dissolution rate and extent and should be corrected prior to PBBM introduction.
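The disintegration-time correction described above (making all particles "available at time zero") can be sketched as a simple preprocessing step; the profile values and the 5 min lag are hypothetical:

```python
def remove_release_lag(times_min, dissolved_pct, t_lag_min):
    """Shift a dissolution profile left by the capsule-opening or tablet
    disintegration time, so t = 0 marks the moment particles become
    available for dissolution. Points sampled before the lag are dropped."""
    return [(t - t_lag_min, d)
            for t, d in zip(times_min, dissolved_pct)
            if t >= t_lag_min]

corrected = remove_release_lag([5, 10, 15, 30, 45], [2, 20, 45, 80, 95],
                               t_lag_min=5)
# the dissolution model is then fitted to `corrected`, not to the raw data
```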
The solubility product of ionizable compounds in the presence of specific buffer salts and/or surfactants should be carefully considered (e.g., formation of less soluble lauryl sulfate salts in the presence of SLS, or reduced hydration of Eudragit RS in the presence of chloride ions in the dissolution medium). In summary, a robust understanding of the experimental dissolution data is required to ensure the development of a meaningful dissolution model able to capture the in vivo performance in a mechanistic manner. To facilitate this process, the critical aspects to consider are summarized in , which may serve as a checklist in the context of in vitro data evaluation for the dissolution model development. 3.3.2.5 Q5: What Is the Appropriate Quality and Quantity of Data to Be Generated to Allow Dissolution Model Validation? The quality of the data is defined by the evaluation of the potential factors that may introduce a bias to the dissolution measurement, as shown in the checklist for in vitro data evaluation prior to dissolution model development, leading to the list of input parameters needed for dissolution modeling. In terms of quantity, there is no definite number at this stage, but it seems that n = 3 different conditions covering the physiological pH range could be sufficient. Care should be taken to obtain adequate release profiles in each dissolution method (see Q1) and to favor dissolution methods where the main component/parameter in the dissolution medium/method influencing drug product dissolution is integrated. For example, for large particles or extended-release matrixes, dissolution data with different agitation rates often provide insight into the release mechanism. For drug substances that are sensitive to pH, covering the physiological pH range is typical.
Finally, for drugs that are sensitive to the presence of surfactants in the medium, a comparison of dissolution profiles with synthetic and naturally occurring surfactants is warranted. Presentation: Methods for Integrating Dissolution. During breakout session C, Xavier Pepin presented a comprehensive overview and description of methods for integrating dissolution profiles into PBBMs, followed by practical considerations on the critical aspects when in vitro dissolution data are used for dissolution model development. This background served as a basis for developing and discussing checklists and a decision tree for dissolution method selection to support the integration of dissolution data into PBBMs. There are many ways to integrate dissolution into most PBBM platforms. These methods range from less to more mechanistic, as shown in . For an IR dosage form, using one method over the others implies certain assumptions regarding the parameters limiting in vivo dissolution. 3.3.1.1 Direct Input: The least mechanistic method to integrate dissolution is to use direct input of the in vitro dissolution data into the model. In this case, the assumption made is that the in vitro dissolution method is representative of the conditions prevailing in vivo, which govern the drug dissolution. In more detail, if such a method is used, one should confirm that neither solubility, drug dose, nor in vivo volume would limit the in vivo dissolution, since there are wide differences between the volumes used in vitro and the volumes observed in vivo. In addition, the in vitro hydrodynamics should be representative of in vivo conditions or should not impact in vitro release, here again because the in vivo hydrodynamics are different from those in vitro. Such assumptions are reasonable when the drug substance is BCS 1 or BCS 1-like and when the formulation itself governs the in vitro and in vivo dissolution.
3.3.1.2 Weibull Function: The use of a Weibull function fitted to in vitro dissolution data is also a nonmechanistic approach, as the in vivo release depends on time only. Similar assumptions to those supporting the direct input of dissolution data are made when using a Weibull function, although it is preferable to use Weibull over direct input, since the Weibull function provides a smoother dissolution curve passing through the measured dissolution data. For direct input methods, as the number of time points for measuring dissolution is generally limited, interpolating dissolution data with a linear correlation between measurements may lead to inaccurate predictions of in vitro (and in vivo) dissolution. 3.3.1.3 Z-factor: The use of the Z-factor vs pH profile or a constant Z-factor should provide for a more mechanistic model. The Z-factor introduced by Takano et al. is a lumped factor: the drug diffusion coefficient (D) divided by the product of true density (ρ), radius of the particle (r0), and thickness of the unstirred water layer (h), i.e., z = D/(ρ · r0 · h) (eq 1). It is evident from eq 1 that the Z-factor can also be expressed in terms of the initial drug particle radius in the formulation. It is also evident from this equation that there is only one bin (one particle size) in the Z-factor. Hence, if the observed in vitro dissolution rate shows more than one phase, a single bin may not be enough to adequately characterize the dissolution of the particles comprised in the formulation. Multiple release phases could arise from the presence of extragranular fine drug substance and granulated drug substance, or the presence of drug substance particles that wet at different rates. In theory, there should not be a dependency of the Z-factor on pH, as pH governs the drug solubility and is independently considered in the equation proposed by Takano et al. to predict in vitro and in vivo dissolution.
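A minimal numeric sketch of a Takano-type rate law built on the Z-factor is shown below (forward-Euler integration; the dose, solubility, volume, and z value are all hypothetical):

```python
def z_factor_dissolution(x0_mg, z, cs_mg_ml, v_ml, t_end_min, dt_min=0.01):
    """Integrate a Takano-type cube-root rate law,
        dX/dt = -z * X0**(1/3) * X**(2/3) * (Cs - C(t)),
    where X is the undissolved amount and C(t) = (X0 - X)/V.
    Returns (time, % dissolved) pairs. A single Z-factor implies a single
    particle-size bin, hence a monophasic profile."""
    x, t, profile = x0_mg, 0.0, []
    while t < t_end_min:
        c = (x0_mg - x) / v_ml
        dx_dt = -z * x0_mg ** (1 / 3) * x ** (2 / 3) * max(cs_mg_ml - c, 0.0)
        x = max(x + dx_dt * dt_min, 0.0)
        t += dt_min
        profile.append((t, 100.0 * (x0_mg - x) / x0_mg))
    return profile

profile = z_factor_dissolution(x0_mg=100.0, z=0.005, cs_mg_ml=0.2,
                               v_ml=900.0, t_end_min=60.0)
```

Fitting then amounts to adjusting z so the simulated curve matches the observed in vitro profile.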
In addition, the fact that the drug diffusion coefficient is an integral part of the Z-factor definition should lead to caution when employing the Z-factor to fit dissolution data obtained in media comprising surfactants. Indeed, surfactant micelle size spans an order of magnitude, which affects the diffusion coefficient of the drug bound to micelles by the same order of magnitude. The size of common micelles summarized from literature data is shown in . 3.3.1.4 P-PSD: The product particle size distribution (P-PSD) was introduced by Pepin et al., where the disappearance of solid drug vs time is expressed as −dX/dt = A(t) · (C_S,u − C_u(t)) · [D_u/h_u(t) + ((1 − f_u)/f_u) · D_b/h_b(t)] (eq 2), where f_u is the drug fraction unbound, D_u is the diffusion coefficient of unbound drug, D_b is the diffusion coefficient of micelle-bound drug, A(t) is the available drug surface area at time t, h_u(t) is the unstirred water layer thickness for unbound drug, h_b(t) is the unstirred water layer thickness for micelle-bound drug, C_S,u is the unbound drug solubility at the surface of the crystal, and C_u(t) is the unbound drug bulk concentration at time t. A(0) is the initial drug substance surface area, which can be represented as a 1 to 10 bin spherical product particle size distribution, the P-PSD. Since the P-PSD can comprise from 1 to 10 bins, there is enough granularity to fit complex dissolution profiles, including those presenting multiple phases. The number of bins can be tuned to the observed dissolution data, and it is recommended to start from the minimum number of bins and increase the number of bins until there is no difference in predictive power across the dissolution data observed. The P-PSD approach can be applied to all dissolution equations beyond the one presented in eq 2. In fact, in platforms such as DDDPlus (Simulations Plus), SIVA (Certara), and MoBi (Open Systems Pharmacology [OSP]), the P-PSD can be fitted to observed dissolution data.
In the above cases, the P-PSD will take the form of a mean spherical particle radius associated with a distribution across the mean. Only one mode of distribution is currently available in these platforms. The equation proposed by Pepin et al. stems from the approach proposed by Gamsiz et al.; however, it assumes immediate partitioning of drugs to micelles at the surface of the drug, and different thicknesses of the UWL for free and micelle-bound drug, according to the equation proposed by Pohl et al.: h_b(t) = h_u(t) · (D_b/D_u)^(1/3) (eq 3). A comparison between the use of the Z-factor vs the P-PSD approach is presented in . The increased predictive performance of the P-PSD approach is related to its ability to differentiate the free and micelle-bound drug and also to capture the impact of the micelle size on the diffusion coefficient of micelle-bound drug. The Z-factor and P-PSD approaches show a similar shape description of the 100 mg acalabrutinib capsule batch L0505009 dissolution profile in phosphate buffer, pH 6.8. If this dissolution data is used to fit the Z-factor and P-PSD, prediction of dissolution of the same batch in media comprising bile salts shows the advantage of the P-PSD over the Z-factor. The use of the apparent drug solubility in both tested media with the surfactant, together with the Z-factor fitted on the medium without the surfactant, leads to an overestimation of the observed dissolution rate. The drug will dissolve more slowly due to the smaller diffusion coefficient of micelle-bound drug, which is best captured with the P-PSD approach. Recently, two additional models for P-PSD were proposed: one which integrates the fluid velocity in the USP2 dissolution apparatus, the P-PSD HD, and one model predicting drug and excipient sedimentation and cone formation at the bottom of the USP2 vessel, the P-PSD HDC.
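The micelle-size effect on the bound-drug diffusion coefficient discussed above can be estimated with the Stokes–Einstein relation; the radii, temperature, and viscosity used here are illustrative only:

```python
import math

def stokes_einstein_cm2_s(radius_nm, temp_k=310.0, eta_pa_s=7.0e-4):
    """Diffusion coefficient (cm^2/s) of a sphere with the given
    hydrodynamic radius: D = kB*T / (6*pi*eta*r).
    eta is roughly that of water at 37 C."""
    kb = 1.380649e-23  # Boltzmann constant, J/K
    d_m2_s = kb * temp_k / (6.0 * math.pi * eta_pa_s * radius_nm * 1e-9)
    return d_m2_s * 1e4  # convert m^2/s to cm^2/s

d_free = stokes_einstein_cm2_s(0.5)   # free drug molecule, ~0.5 nm radius
d_bound = stokes_einstein_cm2_s(5.0)  # drug bound to a ~5 nm micelle
# a 10x larger hydrodynamic radius gives a 10x smaller diffusion coefficient
```

This is why fitting a Z-factor in a surfactant-free medium and reusing it in a micellar medium overestimates the dissolution rate.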
These latter models are important to remove the potential bias coming from formulation sedimentation, or to integrate the impact of fluid velocity in USP2, which would be important for large particles or large dosage forms such as eroding tablets or pellets. The P-PSD concept stems from the fact that the drug substance particle size available for dissolution in the drug product cannot be measured adequately with sizing methods such as laser diffraction applied to the drug substance (DS PSD). DS PSD is an important quality control of a starting material, but the impact of excipients and manufacturing process conditions on the drug substance area available for dissolution cannot be ignored. Process: It is well-known that compression forces during dry granulation or tablet manufacture will lead to fragmentation of brittle drug substances and excipients. Fragmentation will also affect larger particles at low compression forces and show little effect on smaller particles below a threshold size. The use of a single Diffusion Layer Model (DLM) scale factor applied to the measured DS PSD to predict the effect of processing parameters on the DS surface area available in a final formulation cannot therefore be sustained theoretically. DS Particle Aggregation: Aggregation of primary particles in the DS is another factor that can induce a strong bias when predicting the DS surface available for dissolution. Loose or strong aggregates can form in a drug substance because of material properties, manufacturing process, or storage. Laser diffraction methods would typically size an aggregate of primary particles as one large particle with a low surface-to-volume ratio, leading to an underestimation of the drug surface area available for dissolution, as easily demonstrated by comparing laser diffraction predicted powder surface area to BET specific surface area for various batches of drug substances showing various levels of aggregation.
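The aggregation bias described above can be made concrete: for spheres, the specific surface area implied by a size reading is SSA = 3/(ρ·r), so sizing an aggregate of 1 μm primary particles as a single 10 μm particle understates the area tenfold. The density and sizes below are hypothetical, and real laser-diffraction-to-area conversions are more involved; this only illustrates the direction and magnitude of the bias:

```python
def sphere_ssa_cm2_per_g(radius_cm, rho_g_cm3=1.3):
    """Specific surface area of monodisperse spheres:
    SSA = area/mass = (4*pi*r^2) / (rho * 4/3*pi*r^3) = 3 / (rho * r)."""
    return 3.0 / (rho_g_cm3 * radius_cm)

ssa_primaries = sphere_ssa_cm2_per_g(1e-4)   # 1 um primary particles
ssa_aggregate = sphere_ssa_cm2_per_g(10e-4)  # sized as one 10 um aggregate
# BET, measured on the powder, would report close to ssa_primaries, while
# the laser-diffraction reading of the aggregate implies only ssa_aggregate
```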
Shape: The shape of particles will also influence the difference between laser diffraction predicted size and surface area measured with an orthogonal technique such as BET specific surface area. Laser diffraction techniques, which project a volume-equivalent sphere for each particle, will introduce a bias to the measurements the further the particle is from a spherical morphology. Wettability: Finally, the DS particle size cannot predict the impact of the drug substance wetting ability on the dissolution rate. Kim et al. have shown that dry coating the surface of drug crystals with a hydrophilic or hydrophobic material can influence aggregation of particles up to a certain surface coverage and also influence drug dissolution through alteration of the surface energy of the drug, which changes how water can wet the drug surface. The correlation between drug wettability and dissolution has been reported in the literature, and formulation scientists frequently employ wetting agents as excipients to improve the wettability of drugs in final formulations. The sensitivity of the dissolution rate to drug wettability is especially pronounced for small particles. For example, nanosizing technologies require the presence of surfactants to achieve the desired size and suspension stability, i.e., preventing aggregation and reducing the speed of Ostwald ripening. For all of the reasons highlighted above, the size of DS particles measured prior to processing the DS into the final formulation is rarely a good predictor of the drug substance area available for dissolution. There may be rare exceptions to this rule, for example, if the formulation is a suspension, or if the formulation is dry but comprises wettable amorphous spray-dried drug particles encapsulated with low-energy processes. The effect of formulation excipients and processing parameters should be integrated into the mechanistic modeling approaches of drug product dissolution.
The P-PSD or Z-factors can serve this purpose. Discussion: The discussion was centered around 5 key questions. 3.3.2.1 Q1: What Is the Appropriate Dissolution Model for an IR Formulation? A recent review by Anand et al. showed that direct input, Weibull function, Z-factor, or P-PSD approaches were widely applied methods for integrating dissolution in PBBM. Mechanistic approaches like the Z-factor or the P-PSD were mostly used for low-solubility products, and mechanistic methods were applied in 60% of the 27 case studies.
The advantage of mechanistic dissolution models over Weibull functions is that the between- and within-subject variability in in vivo dissolution during population modeling can be captured in a more relevant way. Instead of applying random variation to dissolution (as can be achieved with a Weibull function), mechanistic models rely on variation in system parameters (e.g., volumes, pH, transit times, composition in bile salts) to recalculate a different in vivo dissolution for the drug product in each simulation. This will yield in vivo dissolution closer to reality compared to random variations. Also, the use of mechanistic models is the only option when the model is to be used to predict the impact of prandial state, pH-related DDI, or in vivo dissolution across different populations, all situations where GI physiological changes may profoundly affect the in vivo dissolution rate and make it deviate from the dissolution rate measured in vitro. The criteria to select a dissolution method should therefore be driven by the understanding of the drug product release mechanism and the limitations to in vitro and in vivo dissolution, the impact of manufacturing process and formulation on dissolution, and how well this can be simulated with a given approach. For mechanistic models, it is recommended to generate dissolution data with the same batch in several media/conditions to be able to verify the choice of model and prediction performance in vitro prior to integration of the batch-specific data (Z-factor or P-PSD) in the model. Ideally, to perform the fitting of dissolution data to extract the Z-factor or P-PSD, the method chosen would be discriminative, and the batch dissolution would show an adequate profile, with possibly full dissolution in the medium considered. Practically, this would correspond to picking a dissolution method where most measured data comprise between 20% and 80% drug dissolved.
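The 20–80% rule of thumb above can be turned into a quick screen for candidate dissolution methods; the "most points" cutoff of one half used here is an assumption, as are the example profiles:

```python
def looks_discriminative(dissolved_pct):
    """Return True if more than half of the measured points fall in the
    20-80% dissolved window, a rough proxy for a profile that can
    discriminate between batches when fitting a dissolution model."""
    in_window = sum(1 for d in dissolved_pct if 20.0 <= d <= 80.0)
    return in_window > len(dissolved_pct) / 2

too_fast = [55, 92, 98, 99, 100, 100]     # near-complete by the 2nd point
well_spread = [10, 25, 40, 55, 70, 85, 97]
```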
Typically, a 1× dissolution method as described by Kuiper, where the drug dose divided by the dissolution volume nears the drug solubility in the dissolution medium, ensures maximal discrimination while allowing full dissolution. Fitting a mechanistic dissolution model to one such method, rather than to all dissolution methods simultaneously, is optimal, as the integration of nondiscriminating methods may bias the batch-specific Z-factor or P-PSD determination. Based on the strengths and limitations of each individual dissolution modeling method presented during breakout session C, a decision tree for dissolution model selection was discussed with the audience. The proposed decision tree provides considerations for developing a dissolution model depending on the disintegration properties of the dosage form, the occurrence of coning or sedimentation during dissolution testing, and the sensitivity of the dissolution rate toward changes in agitation conditions, volume, dose, and pH, as well as the presence of surfactant in the dissolution medium. The proposed decision tree is tailored to oral IR dosage forms and presents a clear description of the modeling assumptions to be considered when selecting a dissolution model. There was general agreement from the attendees that such a decision tree for dissolution model selection provides a valuable tool both for biopharmaceutics modelers in the pharmaceutical industry and for regulators when reviewing submitted PBBM cases. 3.3.2.2 Q2: What Are the Input Parameters Required to Mechanistically Evaluate the in Vitro Dissolution Data? When developing a mechanistic dissolution model, the availability of high-quality input data for model parametrization should be a priority. This includes the availability of a sufficient number of in vitro dissolution profiles collected under relevant experimental conditions, depending on the intended purpose of the model.
For example, if the PBBM aims at predicting a pH-related DDI, then the dissolution model may need to be developed and validated using in vitro data generated under various pH conditions. Defining the experimental parameters describing the dissolution setup is prudent for each corresponding dissolution data set, and for dissolution media including surfactants, the properties of the micellar system should also be adequately characterized. presents a list of suggested data to collect and could serve as a checklist in the context of the dissolution model development. In addition to the in vitro data that are generated for direct input into the dissolution model, there might be a need to generate supplementary data to support specific modeling assumptions or to mechanistically explain anomalies. For example, if slow dissolution in pure aqueous systems is attributed to poor drug wettability, this hypothesis may be strengthened by the generation of in vitro dissolution data in media including a surfactant. Similarly, if in vitro dissolution is slow, presumably due to poor tablet disintegration, the hypothesis may be further supported by the generation of in vitro dissolution profiles of the pure DS or of drug product intermediates (granules or final blend prior to tablet compression). Such mechanistic investigations may not directly feed into the model but provide key information to increase the confidence in the selected model parameters and modeling assumptions. 3.3.2.3 Q3: What Are the Criteria and Acceptable Thresholds for in Vitro Dissolution Model Validation? If more than one mechanistic modeling method is applicable, the calculation of model performance indicators such as the average fold error (AFE) and absolute average fold error (AAFE) can provide a rationale for method choice. Ultimately, the prediction performances of the various dissolution modeling methods in the PBBM could also be compared.
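The AFE and AAFE mentioned above are commonly computed on log-transformed prediction/observation ratios; a minimal sketch follows (the example values are hypothetical):

```python
import math

def afe_aafe(observed, predicted):
    """Average fold error (bias) and absolute average fold error (precision):
        AFE  = 10 ** mean(log10(pred / obs))
        AAFE = 10 ** mean(|log10(pred / obs)|)
    An AAFE of 1.0 is a perfect match; AAFE <= 2 is a commonly used
    acceptance criterion in the PBPK literature."""
    logs = [math.log10(p / o) for o, p in zip(observed, predicted)]
    afe = 10.0 ** (sum(logs) / len(logs))
    aafe = 10.0 ** (sum(abs(v) for v in logs) / len(logs))
    return afe, aafe

afe, aafe = afe_aafe(observed=[10.0, 20.0, 40.0], predicted=[12.0, 18.0, 44.0])
```

Because under- and overpredictions cancel in the AFE but not in the AAFE, reporting both separates bias from overall precision.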
Examples of dissolution model fitting and their impact on PBBM prediction were also shared; the outcome can be found in the Supporting Information. 3.3.2.4 Q4: Which Are the Factors to Be Considered When Modeling Dissolution? Prior to the integration of dissolution data into a PBBM, a critical assessment of the quality and relevance of the experimental dissolution data may be useful. In this context, there are several factors to pay attention to, as summarized below. Agitation: The impact of agitation should be considered when choosing an integration method. All models (e.g., Johnson, Wang-Flanagan, Takano, Gamsiz, Pepin, or Salehi) are derived from the Noyes-Whitney equation and rely on the definition of the UWL thickness around dissolving particles. The UWL thickness is a function of fluid velocity around the dissolving particle in the dissolution medium (in vitro and in vivo). When the fluid velocity tends to zero, the thickness of the UWL tends to the radius of the spherical particle; as an approximation, the UWL thickness is equal to the particle radius up to an upper limit of 30 μm, which is supported by simulations and experiments performed in the literature. This hypothesis also fits with the low fluid velocity typically measured in vivo throughout the GI tract, where the average velocity is in the range of 1–2 cm/s, with transient peak velocities of more than 15 cm/s. For particle sizes larger than 30 μm, the UWL thickness typically depends on the agitation, as shown for example by Scholz et al. When a significant impact of agitation on the dissolution rate is shown, the in vitro dissolution model should accommodate the impact of hydrodynamics. Surface pH and Surface Solubility: When the drug has acidic or basic moieties, depending on the pH and composition of the aqueous dissolution medium, an acid–base reaction can happen locally at the surface of the dissolving drug particles, without necessarily affecting the bulk pH.
This reaction changes the pH within the UWL, with the maximal change observed at the surface of the drug. This phenomenon was described theoretically and experimentally in the literature for weak acids, bases, and their salts thanks to the work of Higuchi et al., Mooney et al., and Serajuddin et al. Since the drug surface solubility drives the dissolution rate, it is imperative to consider the drug surface solubility to mechanistically model in vitro and in vivo dissolution rates. If there is a rapid phase change, such as salt disproportionation to the free base, then the free base surface solubility at the medium pH should be determined. Surface pH, also known as microenvironmental pH, is driven by the drug substance but can also be largely influenced by excipients added to the formulation, so excipients should be considered when analyzing dissolution data. The formulation composition should always be known so as to evaluate potential interactions between the drug and excipients during dissolution but also in the solid state, as these reactions can also lead to polymorphic transitions. Chemical Degradation: Chemical degradation can happen during dissolution and impact the amount of drug that is dissolved; a typical example is rifampicin dissolution in the presence or absence of isoniazid. Bell-shaped dissolution curves or a dissolution plateau below the theoretical batch assay could indicate the potential for in vitro degradation. The degradation rate should be measured in a separate experiment with solubilized drug by following the drug concentration over time in the dissolution medium. If degradation is confirmed, it can be integrated into the model (in vitro and in vivo) to better fit the in vitro dissolution and the amount of drug available for in vivo absorption.
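As an illustrative sketch (hypothetical rate constants, not workshop data), a separately measured first-order degradation rate can be folded into a simple dissolution mass balance; ignoring the degradation term would bias the fitted dissolution rate low and misstate the amount available for absorption:

```python
# Hypothetical sketch: first-order dissolution of a 100 mg dose, where the
# dissolved drug also degrades with first-order kinetics (k_deg would be
# measured in a separate experiment on solubilized drug).

def simulate(dose_mg, k_diss, k_deg, dt=0.1, t_end=120.0):
    """Euler integration: d(solid)/dt = -k_diss*solid;
    d(dissolved)/dt = k_diss*solid - k_deg*dissolved (rates in 1/min)."""
    solid, dissolved, t = dose_mg, 0.0, 0.0
    profile = []
    while t <= t_end:
        profile.append((t, dissolved))
        d_solid = k_diss * solid * dt
        solid -= d_solid
        dissolved += d_solid - k_deg * dissolved * dt
        t += dt
    return profile

stable = simulate(100.0, k_diss=0.05, k_deg=0.0)      # plateaus near the dose
degrading = simulate(100.0, k_diss=0.05, k_deg=0.01)  # bell-shaped curve

peak_t, peak_amt = max(degrading, key=lambda p: p[1])
print(f"no degradation: {stable[-1][1]:.1f} mg dissolved at 2 h; "
      f"with degradation: peak {peak_amt:.1f} mg at ~{peak_t:.0f} min")
```

The degrading case reproduces the bell-shaped dissolved-amount curve described above, even though the underlying dissolution rate constant is identical in both runs.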
Physical Degradation: Bell shapes or plateaus during dissolution may also indicate (beyond insufficient solubility or medium volume to dissolve the full drug dose) that a polymorphic drug transition occurs or that there is a polymorphic impurity in the drug substance. For example, a mixture of polymorphic forms with different solubility values will lead to a variation in the rate and extent of dissolution. Precipitation from an amorphous to a crystalline form, or from a salt/cocrystal to its free form, will change the dissolution rate or even bring drug dissolution to a complete stop if the precipitation occurs on the surface of the drug product. The presence of cosolvents or polymers can also change the rate and extent of surface precipitation, and, where relevant, such excipients should be considered critical to the product performance. Drug Product Disintegration: The impact of capsule opening or tablet disintegration on the dissolution profile has been widely presented in the literature. Since dissolution models assume that all the drug particles are available for dissolution at time zero, the disintegration time or capsule opening time should be removed from the observed dissolution data prior to fitting the dissolution rate. This can be achieved by subtracting the time needed for drug release from the observed dissolution time. If possible, models for capsule opening and tablet disintegration should be fitted to in vitro data and applied to in vivo data. It is also known that in vivo capsule opening or in vivo tablet disintegration takes longer than the time observed during USP disintegration testing and would impact gastric residence in vivo.
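A minimal sketch of the lag-time correction described above, using a hypothetical disintegration time and dissolution profile:

```python
# Sketch with hypothetical data: dissolution models assume all particles are
# available at t = 0, so the measured disintegration (or capsule opening) time
# is subtracted from the time axis before fitting the dissolution rate.

def remove_lag(profile, lag_min):
    """Shift (time_min, pct_dissolved) points left by the lag and drop
    points collected before drug release started."""
    return [(t - lag_min, pct) for t, pct in profile if t >= lag_min]

observed = [(0, 0.0), (5, 0.0), (10, 8.0), (15, 30.0),
            (20, 55.0), (30, 82.0), (45, 95.0)]
disintegration_time = 5.0  # min, e.g., from a USP disintegration test

corrected = remove_lag(observed, disintegration_time)
print(corrected)  # the Z-factor or P-PSD fit is then performed on this profile
```

For the in vivo simulation, a separately fitted (and typically longer) in vivo disintegration or capsule-opening lag would then be added back.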
Method Artificial Effects: In addition to the intrinsic properties of the drug substance and drug product described above, the in vitro dissolution performance may be affected by artificial effects in the in vitro dissolution setup, which may not necessarily have relevance for in vivo dissolution. Such effects include in vitro sedimentation or coning and the interaction with components of the dissolution medium. In vitro sedimentation introduces a bias to the dissolution rate and extent and should be corrected prior to PBBM introduction. The solubility product of ionizable compounds in the presence of specific buffer salts and/or surfactants should be carefully considered (e.g., formation of less soluble lauryl sulfate salts in the presence of SLS or reduced hydration of Eudragit RS in the presence of chloride ions in the dissolution medium). In summary, a robust understanding of the experimental dissolution data is required to ensure the development of a meaningful dissolution model able to capture the in vivo performance in a mechanistic manner. To facilitate this process, the critical aspects to consider are summarized in , which may serve as a checklist in the context of in vitro data evaluation for the dissolution model development. 3.3.2.5 Q5: What Is the Appropriate Quality and Quantity of Data to Be Generated to Allow Dissolution Model Validation? The quality of data is defined by the evaluation of potential factors that may introduce a bias to the dissolution measurement, as shown in the checklist for in vitro data evaluation prior to dissolution model development, leading to the list of necessary input parameters needed for dissolution modeling. In terms of quantity, there is no definite number at this stage, but n = 3 different conditions covering the physiological pH range could be sufficient.
Care should be taken to obtain adequate release profiles in each dissolution method (see Q1) and to favor dissolution methods in which the main component/parameter of the dissolution medium/method influencing drug product dissolution is integrated. For example, for large particles or extended-release matrices, dissolution data at different agitation rates often provide insight into the release mechanism. For drug substances that are sensitive to pH, covering the physiological pH range is typical. Finally, for drugs that are sensitive to the presence of surfactants in the medium, a comparison of dissolution profiles with synthetic and naturally occurring surfactants is warranted.
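To illustrate the mechanistic modeling discussed in this session, the following sketch (hypothetical parameters and a single particle size rather than a full P-PSD) implements a Noyes-Whitney dissolution rate with the UWL approximation h = min(particle radius, 30 μm) noted under Q4:

```python
# Hypothetical sketch: single-size Noyes-Whitney dissolution with the UWL
# thickness capped at 30 um (h = min(radius, 30 um)).
# Units: radius um, concentrations mg/mL, density mg/mL, diffusivity um^2/s.

def dissolve(r0_um, dose_mg, cs_mg_ml, vol_ml,
             diff_um2_s=600.0, rho_mg_ml=1200.0, dt_s=1.0, t_end_s=3600.0):
    """Return (time_s, pct_dissolved) pairs from Euler integration of
    dr/dt = -(D/h) * (Cs - Cbulk) / rho."""
    r, t, out = r0_um, 0.0, []
    while t <= t_end_s:
        frac = 1.0 - (max(r, 0.0) / r0_um) ** 3
        out.append((t, 100.0 * frac))
        if r > 0.0:
            c_bulk = dose_mg * frac / vol_ml           # non-sink bulk term
            h = min(r, 30.0)                           # UWL ~ radius, capped
            r -= diff_um2_s / h * (cs_mg_ml - c_bulk) / rho_mg_ml * dt_s
        t += dt_s
    return out

fine = dissolve(r0_um=10.0, dose_mg=100.0, cs_mg_ml=0.2, vol_ml=900.0)
coarse = dissolve(r0_um=60.0, dose_mg=100.0, cs_mg_ml=0.2, vol_ml=900.0)
print(f"10 um particles: {fine[-1][1]:.0f}% dissolved in 1 h; "
      f"60 um particles: {coarse[-1][1]:.0f}%")
```

Because the 60 μm particles spend most of their lifetime with h fixed at 30 μm, their dissolution is far slower than the small particles, which is the particle-size sensitivity a P-PSD fit exploits.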
3.4 BO Session D - Precipitation: From in Vitro Best Practices to in Vivo Relevance This session began with speaker Christian Wagner (Merck Healthcare KGaA, Darmstadt, Germany) and was led by Poonam Delvadia (FDA) and Mark McAllister (Pfizer), with André Dallmann (Bayer) and Elizabeth Gray (FDA) as scribes. 3.4.1 Presentation: To Precipitate or Not to Precipitate, That Is the Question! Loosely adapted from Shakespeare’s Hamlet, pharmaceutical scientists have been asking this question for decades, because drug precipitation in the small intestine can affect the rate and/or extent of oral drug absorption. This, in turn, can contribute to PK variability and can jeopardize the efficacy of an orally administered drug. Thus, there is a strong need for predictive tools to assess the impact of potential drug precipitation on the absorption of orally administered drugs. Drug precipitation typically occurs from a supersaturated state, i.e., when the concentration of dissolved drug exceeds its thermodynamic solubility. Weakly basic drugs are especially susceptible to drug precipitation because their solubility is markedly higher in the (fasted) stomach than in the small intestine. Upon gastric emptying of dissolved drug into the small intestine, the drug’s solubility drops, and molecular clusters form, grow, and precipitate once a critical cluster size is reached (nucleation and growth theory). Besides weakly basic drugs, supersaturating formulations such as ASDs and self-(micro)emulsifying drug delivery systems (S(M)EDDSs) can also be subject to intestinal drug precipitation. Whether or not a drug precipitates thus depends on several drug, formulation, and physiological factors; in any case, the driver of drug precipitation is the reduction of free energy in the system. The complex nature of this process underlines the need for tools that reliably predict luminal drug precipitation, allowing for the translation of results from the lab (in vitro) into the clinic (in vivo) via PBBM tools (in silico).
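The gastric-to-intestinal solubility drop for a weak base can be illustrated with a Henderson-Hasselbalch-type expression; the pKa and intrinsic solubility below are hypothetical, and the calculation ignores the salt solubility limit, the common-ion effect, and the dose actually dissolved:

```python
# Hypothetical monoprotic weak base (pKa 4.5, intrinsic solubility 5 ug/mL):
# total solubility versus pH, and the supersaturation potential generated on
# gastric emptying from pH ~2 (fasted stomach) to pH ~6.5 (upper intestine).

def weak_base_solubility(ph, pka=4.5, s0_ug_ml=5.0):
    """S_total = S0 * (1 + 10**(pKa - pH)) for a monoprotic base."""
    return s0_ug_ml * (1.0 + 10.0 ** (pka - ph))

s_stomach = weak_base_solubility(2.0)
s_intestine = weak_base_solubility(6.5)

# If gastric fluid were saturated, emptying into the intestine could generate
# (before any precipitation, and capped in practice by the dissolved dose):
supersaturation = s_stomach / s_intestine
print(f"S(pH 2.0) = {s_stomach:.0f} ug/mL, S(pH 6.5) = {s_intestine:.2f} ug/mL,"
      f" potential supersaturation ~{supersaturation:.0f}-fold")
```

The steep solubility ratio is why weak bases dominate the precipitation discussion: even partial gastric dissolution can drive the intestinal concentration far above saturation.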
During recent years, various in vitro precipitation assays have been developed. These assays can be applied throughout the development cycle of a drug, i.e., from early research through life-cycle management. Most of these in vitro assays share the principle of simulating physiological conditions by transferring a drug solution or suspension from an artificial stomach (donor) into an artificial small intestine (acceptor) compartment. The concentration of dissolved drug can be measured by various techniques, such as liquid chromatography or in-line UV–vis. Small-scale assays are typically used to investigate the precipitation behavior of the drug in a typical preformulation setting, i.e., using small quantities of the drug substance. Large-scale models, on the other hand, typically use physiologically relevant gastric and intestinal fluid volumes, which allows for performance testing of formulations. More advanced models, which aim at simulating the interplay between drug precipitation and absorption, have also been published. Of note, a drug can precipitate as crystalline or amorphous form(s), which, in turn, can impact the rate and extent of redissolution of the precipitate. Likewise, the particle size of the precipitate can also impact its redissolution kinetics. A well-known example of amorphous precipitation is gefitinib, which was shown to precipitate in an amorphous state and then slowly recrystallize. Whenever possible, characterizing the solid state of the precipitated drug, testing for redissolution, and adapting the PBBM accordingly would be a viable approach. Despite significant advances during the past 20 years, all in vitro systems to predict drug precipitation remain highly artificial, as they are not capable of reflecting the complex nature of human anatomy and physiology in its totality.
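The donor-to-acceptor transfer concept shared by these assays can be sketched as a minimal two-compartment simulation; the volumes and rate constants below are invented, and precipitation is reduced to a single first-order constant acting on the supersaturated excess:

```python
# Hypothetical transfer-assay sketch: first-order "gastric emptying"
# (half-life ~10 min) of a dissolved 50 mg dose from a donor into a 500 mL
# acceptor compartment with low solubility (20 ug/mL); the excess above
# solubility precipitates with a first-order rate constant.

def transfer(dose_mg=50.0, v_acceptor_ml=500.0, kt_per_min=0.069,
             cs_mg_ml=0.02, kp_per_min=0.05, dt=0.1, t_end=120.0):
    m_donor, m_diss, m_precip, t, cmax = dose_mg, 0.0, 0.0, 0.0, 0.0
    while t <= t_end:
        moved = kt_per_min * m_donor * dt        # donor -> acceptor
        m_donor -= moved
        m_diss += moved
        excess_mg = m_diss - cs_mg_ml * v_acceptor_ml
        if excess_mg > 0.0:                      # supersaturated: precipitate
            dp = kp_per_min * excess_mg * dt
            m_diss -= dp
            m_precip += dp
        cmax = max(cmax, m_diss / v_acceptor_ml)
        t += dt
    return cmax, m_precip

cmax, precipitated = transfer()
print(f"peak acceptor concentration ~{1000 * cmax:.0f} ug/mL vs solubility "
      f"20 ug/mL; ~{precipitated:.0f} mg precipitated within 2 h")
```

Even this toy model reproduces the characteristic transfer-assay shape: a transient supersaturation peak in the acceptor followed by decay toward the solubility limit as precipitation catches up with the inflow.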
The comparatively high number of in vitro precipitation assays described in the literature indicates a lack of harmonization/standardization, especially since the selection of a suitable in vitro precipitation model seems to be a case-by-case decision depending on the drug and formulation properties. A “universal” in vitro model capable of simulating luminal drug precipitation for a wide variety of compounds and at various conditions (dose, prandial or disease state, formulation) would increase confidence in in vitro-based precipitation predictions. In addition to in vitro precipitation assays, luminal sampling from volunteers or clinical PK data can also be used to deduce whether a drug may be prone to precipitation. For example, if PK data from a well-designed single ascending dose study indicate linearity in relevant PK parameters such as AUC, Cmax, and elimination (no flip-flop kinetics), an impact of precipitation on drug absorption becomes unlikely. In contrast, nonlinear AUC or Cmax, or a pronounced shift in tmax, may indicate nonlinear absorption, potentially deriving from solubility/dissolution limitations and/or drug precipitation. Time-dependent effects, nonlinear clearance mechanisms, disease state (healthy volunteers vs patients), changes in dose and/or formulation, and other confounding factors should be taken into consideration when deducing precipitation characteristics from clinical PK data. In contrast to in vitro data, in vivo data typically do not provide mechanistic insight into the precipitation process; parameter identification therefore remains a potential issue when precipitation characteristics are deduced from clinical data.
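The dose-linearity screen described above is often formalized with a power model, AUC = a * dose**b, where b near 1 is consistent with linear absorption and b clearly below 1 may flag solubility- or precipitation-limited absorption. A sketch with invented single-ascending-dose data:

```python
import math

# Sketch: power-model dose-proportionality check, AUC = a * dose**b, with b
# estimated by least squares on log-transformed data. The AUC values below
# are invented for illustration.

def power_exponent(doses, aucs):
    xs = [math.log(d) for d in doses]
    ys = [math.log(a) for a in aucs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

doses = [25, 50, 100, 200, 400]                   # mg
linear_auc = [10.1, 19.8, 41.0, 79.5, 161.0]      # roughly dose-proportional
saturating_auc = [10.3, 19.5, 34.0, 49.0, 62.0]   # less than proportional

print(f"b(linear) ~ {power_exponent(doses, linear_auc):.2f}; "
      f"b(saturating) ~ {power_exponent(doses, saturating_auc):.2f}")
```

As cautioned above, a low exponent alone does not identify the mechanism; confounders such as nonlinear clearance must be ruled out before attributing it to precipitation.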
To translate insights on drug precipitation into a meaningful prediction and potentially extrapolate to untested scenarios, the results from an in vitro precipitation study (including solid-state and redissolution characterization of the precipitate) or a clinical PK trial (including luminal aspiration studies) can be used to inform a PBBM. This translational, integrative approach permits the prediction of luminal drug precipitation at various doses and prandial states and for different formulations. Commercially available PBBM tools typically offer two ways of applying precipitation kinetics to the simulations, i.e., applying a simplistic precipitation rate constant or time, combined with supersaturation, or applying a mechanistic nucleation and growth model. The latter approach allows for the mechanistic simulation of drug precipitation by fitting nucleation and growth parameters to in vitro or in vivo data. From a scientific perspective, in vitro precipitation setups should be suited to extract nucleation and growth parameters for use as input for a PBBM. However, the low number of publications describing the application of software built-in mechanistic precipitation tools indicates that the advantage of applying these tools as part of a commercially available PBBM suite still needs to be demonstrated. “To precipitate, or not to precipitate”: this question remains, at least partly, unanswered. As has been discussed in the scientific community previously, the results of this workshop also revealed that currently available in vitro tools to predict drug precipitation often lack “universal” predictive power; no in vitro tool currently available is capable of predicting drug precipitation (or the lack thereof) for a wide variety of drugs and formulations.
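As a hedged illustration of the mechanistic nucleation-and-growth option, classical nucleation theory gives a rate of the form J = A * exp(-B / ln(S)**2); the A and B values below are arbitrary placeholders, whereas in practice such parameters would be fitted to in vitro or in vivo data:

```python
import math

# Illustrative classical-nucleation-theory rate as a function of the
# supersaturation ratio S = C/Cs. A (pre-exponential) and B (interfacial
# energy term) are arbitrary placeholder values.

def nucleation_rate(s, a=1e6, b=50.0):
    """Nucleation rate for supersaturation ratio s; zero at or below saturation."""
    if s <= 1.0:
        return 0.0
    return a * math.exp(-b / math.log(s) ** 2)

for s in (1.5, 2.0, 5.0, 10.0):
    print(f"S = {s:4.1f}: J ~ {nucleation_rate(s):.3g}")
```

The extreme steepness of this function (negligible nucleation until high supersaturation, then a rapid rise) is what makes mechanistic precipitation parameters hard to identify from sparse data, consistent with the limited published use of these built-in tools.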
Likewise, there are still significant knowledge gaps, for example, with respect to our understanding of the impact of GI hydrodynamics and transit rates (including the “Magenstraße”), the distribution of fluid pockets, the impact of intestinal mucus, and transporter effects on luminal drug precipitation. Understanding these properties would aid in developing improved in vitro precipitation setups and more predictive PBBM tools. PBBM tools should benefit from ongoing advances in scientific research and constantly be updated with state-of-the-art knowledge. Despite significant improvements during the past decades in terms of in vitro methodology to test for drug precipitation, computational and software capabilities to model it, and knowledge about the anatomy and physiology of the human GI tract (which, besides the drug properties themselves, affect the rate and extent of drug precipitation), predicting drug precipitation is still associated with a high degree of uncertainty, especially for drugs with impaired absorption. For this purpose, a decision tree on how to test for drug precipitation and apply it to a PBBM was presented during the workshop. The decision tree is adapted from recommendations in a previous publication and reflects the general workflow applied to precipitation predictions in PBBMs in one of the IQ working group’s member companies (Merck Healthcare KGaA, Darmstadt, Germany). As clinical PK data are thought to provide the strongest evidence of impaired drug absorption caused by, e.g., drug precipitation, the starting point of the decision tree is the question of the availability of clinical PK data. The left side of the decision tree (“no clinical data available”) describes bottom-up in vitro methods to deduce precipitation parameters for the PBBM input. Given the lack of a “universal” precipitation assay, the decision tree does not recommend using a particular in vitro assay to predict drug precipitation.
Instead, it leaves it to the discretion of the biopharmaceutical scientist to select a suitable assay. One key element of the decision tree is the recommendation to apply precipitation scenarios to the PBBM. For example, the modeler could apply a “no versus moderate precipitation” scenario (in vitro setup indicates no or very modest precipitation) or a “moderate versus high precipitation” scenario (in vitro setup indicates precipitation). This approach mitigates the uncertainties associated with many in vitro precipitation assays, particularly their tendency to overpredict drug precipitation. The right side (“clinical data available”) describes a top-down method for deducing precipitation kinetics, i.e., the analysis of clinical PK data. The key to reliably deducing precipitation parameters is the availability of high-quality PK data, e.g., from a dose escalation study, ideally conducted in healthy volunteers. Other confounding factors, such as nonlinear clearance mechanisms or time-dependent effects, should be excluded. One drawback of the top-down approach is the lack of parameter identification (e.g., the individual impact of drug dissolution, precipitation, and redissolution on the PK profile); i.e., this approach is a nonmechanistic one. The decision tree presented herein considers the above-mentioned uncertainties around the in vitro and in silico prediction of drug precipitation. It can be flexibly adapted based on specific needs and can be refined continuously based on future scientific advancements. Therefore, the decision tree should be understood as a practical tool rather than a strict “operating procedure”. 3.4.2 Discussion After the presentation, the audience was guided by Mark and Poonam to discuss the five highlighted questions below. 3.4.2.1 Q1: Which Limitations of Commonly Used in Vitro Precipitation Assays Based on Transfer Methodology Can Be Addressed by an Improved Experimental Design?
The design of in vitro precipitation assays should be based on the intended application and what data are required; for example, is the assay being used to perform formulation ranking or for informing PBBM input? There was a debate around the criticality of integrating a permeability-like component within the in vitro precipitation assay, particularly for compounds with high permeability. As a general concept, it was suggested that the thoughtful inclusion of a well-designed permeability component (absorption compartment) in the in vitro dissolution assay would be expected to help with generating more accurate quantitative predictions and rank orders for formulations. However, it was also recognized that the practical limitations for modifying in vitro assays to accurately simulate in vivo permeability were significant. Biphasic dissolution assays that are designed in a two-stage manner (e.g., addition of the lipid phase and pH shift after 30 min to reflect the transfer from stomach to the upper intestine) were also considered by some participants as an improved method. It is also important to understand what the solid state of the precipitant is for modeling. The particle size (distribution) of the precipitate(s) should ideally be measured in vitro so that it can be included in a PBBM, along with the measurement of pH values and whether they have changed to account for these inputs in the model. It was suggested that precipitated material be isolated and dissolution measured to accurately characterize the redissolution performance. It was also suggested that two-phasic and/or transfer computational models can be used as a good approach when attempting to correlate in vitro and in vivo supersaturation concentrations. Another member in the audience from industry stated that different methodologies are used based on whether they are looking at the drug product or the drug substance. 
The totality of data obtained from different in vitro experiments should then be considered. Though it is always difficult to incorporate a permeability component with in vitro systems, a complex model with an absorptive component has been helpful. The audience seemed to agree that how you present a drug to an absorptive surface area in vitro is very important because in vitro modeling can overestimate concentrations at which precipitation occurs. For many compounds in developmental stages, though early precipitation data may have raised a red flag, usually those early precipitation risks are not as limiting as predicted by in vitro data; therefore, should we consider permeability to be a saver for some drugs that precipitate? This again stresses the importance of including an absorption compartment in the in vitro dissolution assay. Ultimately, while there are many different transfer models used to measure the rate of precipitation, there is not a one size fits all approach, as the complexity of the assay required depends upon the question (e.g., drug precipitation propensity, impact of formulation, etc.) that we are asking. 3.4.2.2 Q2: Can We Identify the Class of Compounds for Which the Need to Integrate a Permeation-Like Process in the Precipitation Assay Is Essential for Accurate Estimation of Precipitation, and What Are the Recommended Experimental Options for This? It was suggested to build a data set of molecules across the range of physicochemical space to define supersaturation and precipitation performance that could be used in verifying models. It was noted that a number of compounds had been studied during the IMI OrBiTo project and a recent review that summarizes the available human data from intubation for a large number of molecules could be a useful starting point for such a database. 
3.4.2.3 Q3: What Are the Options/Best Practices for Characterizing (Or Predicting) Precipitated Material Attributes (Form, Particle Size, and Solubility) for Accurate Input to PBBM? Initially, an attendee in the audience stated that prior to looking into the software capabilities samples should be collected so that the solid state of the precipitate and its particle size can be determined and measured. Though many agreed, based on the responses from industry, this is not a common practice. Some industry representatives reported that precipitated material attributes are nowadays increasingly characterized, but concerns were raised about whether enough precipitated material could be obtained for analysis. However, drugs may precipitate as amorphous forms, which are known to exhibit higher solubility, or as crystalline forms that exhibit lower solubility. An example of gefitinib was discussed and shows that gefitinib precipitates in an amorphous form that converts to a crystalline form. This example underscores the importance of understanding the solid-state characteristics for modeling. Nevertheless, the question remains: What is the best approach (mechanistic or descriptive) given that there is no standard practice? Further discussion centered around redissolution, which can be used to back-calculate particle size. It was stated that this approach is easier than measuring the particle size. A series of experiments conducted with posaconazole were also discussed, as in vitro experiments using the transfer assay showed an aggregate structure that was not crystalline or amorphous. , More specifically, the obtained phase-separated species appeared to be metastable, reaching a plateau above the thermodynamic solubility but below the supersaturated state. The attributes of this phase-separated species could not be further elucidated. 
This observation challenges the current practice of in vitro to in vivo translation; can we assume from these studies that what happens in vitro translates to in vivo? As in vivo particles do not grow in an isolated medium, they might have attributes different from those of precipitates isolated from in vitro experiments. There was also some discussion about overpredicting precipitation, as ketoconazole precipitates strongly in vitro, but in vivo, it was determined that only about 10% of the dose precipitated. It was again stressed that a curated set of case examples with well understood in vivo behavior would be helpful to define parameters that need to be better characterized in vitro. 3.4.2.4 Q4: What Are the Best Practices for Modeling Precipitation under Physiologically Relevant Luminal Conditions–First Order Fixed Rate Constant/Mechanistic Nucleation and Growth Predictions in Dynamic pH/Fluid Volumes? The first approach brought up was a bottom-up approach, in which the kinetics observed in the in vitro experiment are modeled. Subsequently, the dissolution–precipitation model is integrated in a PBBM framework via IVIVE to simulate the behavior in vivo. This approach was preferred over a top-down approach, where precipitation kinetics are fitted to observed PK data. From a physical and mechanistic modeling perspective, it was considered valuable to separate processes involved in dissolution and precipitation from each other, measure them individually, and then combine all of the individual mechanisms in a model to obtain an improved outcome. A question arose regarding whether anyone has used the emptying half-life in modeling and then investigated variability? Similarly, it was emphasized that physiological variability needs to be accounted for in addition to the variability associated with the pharmaceutical performance of the delivery system in the PBBM. 
Presentation: To Precipitate or Not to Precipitate, That Is the Question!
Loosely adapted from Shakespeare’s Hamlet, pharmaceutical scientists have been asking this question for decades, because drug precipitation in the small intestine can affect the rate and/or extent of oral drug absorption. This, in turn, can contribute to PK variability and can jeopardize the efficacy of an orally administered drug. Thus, there is a great need for predictive tools to assess the impact of potential drug precipitation on the absorption of orally administered drugs.
Drug precipitation typically occurs from a supersaturated state, i.e., when the concentration of the dissolved drug exceeds its thermodynamic solubility. Weakly basic drugs are especially susceptible to precipitation because their solubility is markedly higher in the (fasted) stomach than in the small intestine. Upon gastric emptying of dissolved drug into the small intestine, the drug’s solubility drops, and molecule clusters form, grow, and precipitate once a critical cluster size is reached (nucleation and growth theory). Besides weakly basic drugs, supersaturating formulations such as ASDs and self-(micro)emulsifying drug delivery systems (S(M)EDDSs) can also be subject to intestinal drug precipitation. Whether or not a drug precipitates thus depends on several drug, formulation, and physiological factors. In any case, the driver of drug precipitation is the reduction of free energy in the system. The complex nature of this process underlines the need for tools that reliably predict luminal drug precipitation, allowing for the translation of results from the lab (in vitro) into the clinic (in vivo) via PBBM tools (in silico). During recent years, various in vitro precipitation assays have been developed. These assays can be applied throughout the development cycle of a drug, i.e., from early research through life-cycle management. The commonality of most of the in vitro assays is that they strive to simulate physiological conditions by transferring a drug solution or suspension from an artificial stomach (donor) into an artificial small intestine (acceptor) compartment. The concentration of dissolved drug can be measured by various techniques, such as liquid chromatography or in-line UV–vis.
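The simplistic, first-order view of precipitation from a supersaturated state described above can be sketched in a few lines. The rate constant, solubility, and starting concentration below are illustrative assumptions, not values from the workshop:

```python
# Minimal sketch (illustrative parameters only): first-order precipitation of a
# supersaturated weak base after gastric emptying. The dissolved concentration
# decays toward the thermodynamic solubility Cs with rate constant kp.

import math

def dissolved_conc(c0, cs, kp, t):
    """Dissolved drug concentration at time t (min).
    c0: initial (supersaturated) concentration, cs: thermodynamic solubility,
    kp: first-order precipitation rate constant (1/min)."""
    if c0 <= cs:
        return c0  # no supersaturation -> no driving force for precipitation
    return cs + (c0 - cs) * math.exp(-kp * t)

# Example: 10-fold supersaturation (c0 = 100 ug/mL, cs = 10 ug/mL), kp = 0.05/min
profile = [dissolved_conc(100.0, 10.0, 0.05, t) for t in (0.0, 15.0, 60.0)]
```

In a PBBM, such a rate constant (or its reciprocal, a "precipitation time") is one of the simplest ways to parametrize luminal precipitation.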
On the one hand, small-scale assays are typically used to investigate the precipitation behavior of the drug in a typical preformulation setting, i.e., using small quantities of the drug substance. On the other hand, large-scale models typically use physiologically relevant gastric and intestinal fluid volumes, which allows for performance testing of formulations. More advanced models, which aim at simulating the interplay between drug precipitation and absorption, have also been published. Of note, a drug can precipitate as crystalline or amorphous form(s), which, in turn, can impact the rate and extent of redissolution of the precipitate. Likewise, the particle size of the precipitate can also impact its redissolution kinetics. A well-known example of amorphous precipitation is gefitinib, which was shown to precipitate in an amorphous state and then slowly recrystallize. Whenever possible, characterizing the solid state of the precipitated drug, testing for redissolution, and adapting the PBBM accordingly would be a viable approach. Despite significant advances during the past 20 years, all in vitro systems to predict drug precipitation remain highly artificial, as they are not capable of reflecting the complex nature of human anatomy and physiology in its totality. The comparably high number of in vitro precipitation assays described in the literature indicates a lack of harmonization/standardization, especially since the selection of a suitable in vitro precipitation model seems to be a case-by-case decision, depending on the drug and formulation properties. A “universal” in vitro model capable of simulating luminal drug precipitation for a wide variety of compounds and at various conditions (dose, prandial, or disease state, formulation) would increase confidence in in vitro-based precipitation predictions.
In addition to in vitro precipitation assays, luminal sampling from volunteers or clinical PK data can also be used to deduce whether a drug may be prone to precipitation. For example, if PK data from a well-designed single ascending dose study indicate linearity in relevant PK parameters such as AUC, C max , and elimination (no flip-flop kinetics), the impact of precipitation on drug absorption becomes unlikely. In contrast, nonlinear AUC or C max , or a pronounced shift in t max , may indicate nonlinear absorption, potentially deriving from solubility/dissolution limitations and/or drug precipitation. Time-dependent effects, nonlinear clearance mechanisms, disease state (healthy volunteers vs patients), changes in dose and/or formulation, and other confounding factors should be taken into consideration when deducing precipitation characteristics from clinical PK data. In contrast to in vitro data, however, in vivo data typically do not provide mechanistic insights into the precipitation process. Therefore, parameter identification remains a potential issue when precipitation characteristics are deduced from clinical data. To translate insights from drug precipitation into a meaningful prediction and potentially extrapolate to untested scenarios, the results from an in vitro precipitation study (including solid-state and redissolution characterization of the precipitate) or a clinical PK trial (including luminal aspiration studies) can be used to inform a PBBM. This translational, integrative approach permits the prediction of luminal drug precipitation at various doses and prandial states and for different formulations.
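As an illustration of the dose-linearity reasoning above, a single-ascending-dose data set can be screened by comparing dose-normalized AUC values across dose levels. The numbers and the 20% threshold below are assumptions for the sketch only, not recommended acceptance criteria:

```python
# Illustrative screen (hypothetical data): a marked drop in dose-normalized AUC
# at higher doses can flag solubility/dissolution limits or luminal precipitation.

doses = [50, 100, 200, 400]          # mg (hypothetical dose levels)
aucs  = [10.0, 20.5, 39.0, 52.0]     # ug*h/mL (hypothetical observations)

dn_auc = [auc / dose for dose, auc in zip(doses, aucs)]
ref = dn_auc[0]  # lowest dose taken as the linear reference

# flag dose levels whose dose-normalized AUC falls >20% below the reference
flags = [dose for dose, x in zip(doses, dn_auc) if x < 0.8 * ref]
```

With these hypothetical data, only the 400 mg dose is flagged, consistent with absorption becoming nonlinear at the top of the dose range.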
Commercially available PBBM tools typically offer two possibilities for applying precipitation kinetics to the simulations, i.e., applying a simplistic precipitation rate constant or time, combined with supersaturation, or applying a mechanistic nucleation and growth model. The latter approach allows for the mechanistic simulation of drug precipitation by fitting nucleation and growth parameters to in vitro or in vivo data. From a scientific perspective, in vitro precipitation setups should be suited to extract nucleation and growth parameters for use as input for a PBBM. However, the low number of publications describing the application of software built-in mechanistic precipitation tools indicates that the advantage of applying these tools as part of a commercially available PBBM suite still needs to be demonstrated. “To precipitate, or not to precipitate” – this question remains, at least partly, unanswered. As has been discussed in the scientific community previously, the results of this workshop also revealed that our currently available in vitro tools to predict drug precipitation often lack “universal” predictive power: no in vitro tool currently available is capable of predicting drug precipitation (or the lack thereof) for a wide variety of drugs and formulations. Likewise, there are still significant knowledge gaps, for example, with respect to our understanding of the impact of GI hydrodynamics and transit rates (including the “Magenstraße”), the distribution of fluid pockets, the impact of intestinal mucus, and transporter effects on luminal drug precipitation. Understanding these properties would aid in developing improved in vitro precipitation setups and more predictive PBBM tools. PBBM tools should benefit from ongoing advances in scientific research and constantly be updated with state-of-the-art knowledge.
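The mechanistic nucleation-and-growth option mentioned above can be illustrated with a classical nucleation theory expression, in which the nucleation rate depends steeply on the supersaturation ratio. The prefactor A and barrier term B below are arbitrary placeholders, not fitted parameters:

```python
# Sketch of the mechanistic alternative to a fixed rate constant: classical
# nucleation theory gives a nucleation rate J = A * exp(-B / ln(S)^2), where
# S = C/Cs is the supersaturation ratio. A lumps kinetic terms and B lumps
# interfacial-energy terms; the values here are illustrative only.

import math

def nucleation_rate(S, A=1e6, B=20.0):
    """Nuclei formed per unit volume and time for supersaturation ratio S."""
    if S <= 1.0:
        return 0.0  # undersaturated or saturated: no nucleation
    return A * math.exp(-B / (math.log(S) ** 2))

rates = {S: nucleation_rate(S) for S in (1.0, 2.0, 5.0, 10.0)}
```

The steep dependence on S is the reason a mildly supersaturated drug may show negligible nucleation in one assay yet precipitate rapidly at a higher dose.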
Despite significant improvements during the past decades in terms of in vitro methodology to test for drug precipitation, computational and software capabilities to model it, and knowledge about the anatomy and physiology of the human GI tract (which, besides the drug properties themselves, affect the rate and extent of drug precipitation), predicting drug precipitation is still associated with a high degree of uncertainty, especially for drugs with impaired absorption. To address this uncertainty, a decision tree on how to test for drug precipitation and apply it to a PBBM was presented during the workshop. The decision tree is adapted from recommendations in a previous publication and reflects the general workflow applied to precipitation predictions in PBBMs in one of the IQ working group’s member companies (Merck Healthcare KGaA, Darmstadt, Germany). As clinical PK data are thought to provide the highest evidence of impaired drug absorption, caused by, e.g., drug precipitation, the starting point of the decision tree is the question of the availability of clinical PK data. The left side of the decision tree (“no clinical data available”) describes bottom-up in vitro methods to deduce precipitation parameters for the PBBM input. Given the lack of a “universal” precipitation assay, the decision tree does not recommend using a particular in vitro assay to predict drug precipitation. Instead, it leaves it to the discretion of the biopharmaceutical scientist to decide on a suitable assay. One key element of the decision tree is the recommendation to apply precipitation scenarios to the PBBM. For example, the modeler could apply a “no versus moderate precipitation” scenario (in vitro setup indicates no or very modest precipitation) or a “moderate versus high precipitation” scenario (in vitro setup indicates precipitation). This approach mitigates the uncertainties associated with many in vitro precipitation assays, particularly their tendency to overpredict drug precipitation.
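For illustration only, the branching logic described above (clinical PK data available → top-down analysis; otherwise → bottom-up in vitro work with scenario bracketing in the PBBM) might be encoded as follows. The step labels paraphrase the text and are not part of any software tool:

```python
# Hypothetical encoding of the decision tree's branching; purely illustrative.

def precipitation_workflow(clinical_pk_available, in_vitro_indicates_precipitation=None):
    """Return the paraphrased workflow steps for a given evidence situation."""
    if clinical_pk_available:
        # top-down branch: deduce kinetics from high-quality PK data
        return ["check confounders (nonlinear CL, time dependence)",
                "fit precipitation kinetics top-down to PK data"]
    # bottom-up branch: bracket the uncertainty with precipitation scenarios
    if in_vitro_indicates_precipitation:
        return ["run PBBM with moderate vs high precipitation scenarios"]
    return ["run PBBM with no vs moderate precipitation scenarios"]
```

Encoding the tree this way makes the scenario-bracketing recommendation explicit: the PBBM is always run under at least two precipitation assumptions rather than a single point estimate.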
The right side (“clinical data available”) describes a top-down method for deducing precipitation kinetics, i.e., the analysis of clinical PK data. The key to reliably deducing precipitation parameters is the availability of high-quality PK data, e.g., from a dose escalation study, which would ideally be conducted in healthy volunteers. Other confounding factors, such as nonlinear clearance mechanisms or time-dependent effects, should be excluded. One drawback of the top-down approach is the lack of parameter identification (e.g., the individual impact of drug dissolution, precipitation, and redissolution on the PK profile); i.e., this approach is a nonmechanistic one. The decision tree presented herein considers the above-mentioned uncertainties around the in vitro and in silico prediction of drug precipitation. It can be flexibly adapted based on specific needs and can be refined continuously based on future scientific advancements. Therefore, the decision tree should be understood as a practical tool rather than a strict “operating procedure”.
3.4.2 Discussion
After the presentation, the audience was guided by Mark and Poonam to discuss the five highlighted questions below.
3.4.2.1 Q1: Which Limitations of Commonly Used in Vitro Precipitation Assays Based on Transfer Methodology Can Be Addressed by an Improved Experimental Design?
The design of in vitro precipitation assays should be based on the intended application and what data are required; for example, is the assay being used to perform formulation ranking or for informing PBBM input? There was a debate around the criticality of integrating a permeability-like component within the in vitro precipitation assay, particularly for compounds with high permeability.
As a general concept, it was suggested that the thoughtful inclusion of a well-designed permeability component (absorption compartment) in the in vitro dissolution assay would be expected to help with generating more accurate quantitative predictions and rank orders for formulations. However, it was also recognized that the practical limitations of modifying in vitro assays to accurately simulate in vivo permeability were significant. Biphasic dissolution assays that are designed in a two-stage manner (e.g., addition of the lipid phase and pH shift after 30 min to reflect the transfer from the stomach to the upper intestine) were also considered by some participants to be an improved method. It is also important to understand the solid state of the precipitate for modeling. The particle size (distribution) of the precipitate(s) should ideally be measured in vitro so that it can be included in a PBBM, along with the measurement of pH values and whether they have changed, to account for these inputs in the model. It was suggested that precipitated material be isolated and its dissolution measured to accurately characterize the redissolution performance. It was also suggested that biphasic and/or transfer computational models can be a good approach when attempting to correlate in vitro and in vivo supersaturation concentrations. Another audience member from industry stated that different methodologies are used depending on whether the drug product or the drug substance is being examined. The totality of data obtained from different in vitro experiments should then be considered. Though it is always difficult to incorporate a permeability component into in vitro systems, a complex model with an absorptive component has been helpful. The audience seemed to agree that how a drug is presented to an absorptive surface area in vitro is very important, because in vitro modeling can overestimate the concentrations at which precipitation occurs.
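To make the transfer-assay concept concrete, here is a rough numerical sketch (explicit Euler integration, illustrative parameters only, and deliberately without the absorption compartment discussed above): drug empties first-order from a gastric compartment into an intestinal compartment, where any amount above the solubility limit precipitates with a first-order rate.

```python
# Rough sketch of a transfer-type experiment; all parameters are illustrative.

import math

def transfer_sim(dose_mg=100.0, v_int_ml=250.0, cs_mg_ml=0.05,
                 ke=math.log(2) / 10.0,   # gastric emptying, t1/2 ~ 10 min
                 kp=0.05,                 # precipitation rate constant (1/min)
                 dt=0.1, t_end=120.0):
    gastric, dissolved, precip = dose_mg, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        transferred = ke * gastric * dt          # gastric emptying step
        gastric -= transferred
        dissolved += transferred
        excess = dissolved - cs_mg_ml * v_int_ml  # supersaturated excess (mg)
        if excess > 0:
            dp = kp * excess * dt                 # first-order precipitation
            dissolved -= dp
            precip += dp
    return gastric, dissolved, precip

gastric_mg, dissolved_mg, precip_mg = transfer_sim()
```

Even this minimal two-compartment sketch reproduces the qualitative behavior discussed above: without a sink (absorption) term, the dissolved amount stays supersaturated for a long time, which is one reason purely dissolutive setups tend to overpredict precipitation.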
For many compounds in developmental stages, though early precipitation data may have raised a red flag, those early precipitation risks are usually not as limiting as predicted by in vitro data; therefore, should we consider permeability a “safety net” for some drugs that precipitate? This again stresses the importance of including an absorption compartment in the in vitro dissolution assay. Ultimately, while there are many different transfer models used to measure the rate of precipitation, there is no one-size-fits-all approach, as the complexity of the assay required depends on the question (e.g., drug precipitation propensity, impact of formulation, etc.) that we are asking.
3.4.2.2 Q2: Can We Identify the Class of Compounds for Which the Need to Integrate a Permeation-Like Process in the Precipitation Assay Is Essential for Accurate Estimation of Precipitation, and What Are the Recommended Experimental Options for This?
It was suggested to build a data set of molecules across the range of physicochemical space to define supersaturation and precipitation performance that could be used in verifying models. It was noted that a number of compounds had been studied during the IMI OrBiTo project, and a recent review that summarizes the available human intubation data for a large number of molecules could be a useful starting point for such a database.
3.4.2.3 Q3: What Are the Options/Best Practices for Characterizing (Or Predicting) Precipitated Material Attributes (Form, Particle Size, and Solubility) for Accurate Input to PBBM?
Initially, an attendee stated that, prior to looking into the software capabilities, samples should be collected so that the solid state of the precipitate and its particle size can be determined and measured. Though many agreed, the responses from industry indicated that this is not a common practice.
Some industry representatives reported that precipitated material attributes are nowadays increasingly characterized, but concerns were raised about whether enough precipitated material could be obtained for analysis. Notably, drugs may precipitate as amorphous forms, which are known to exhibit higher solubility, or as crystalline forms, which exhibit lower solubility. The example of gefitinib was discussed: it precipitates in an amorphous form that subsequently converts to a crystalline form. This example underscores the importance of understanding the solid-state characteristics for modeling. Nevertheless, the question remains: What is the best approach (mechanistic or descriptive), given that there is no standard practice? Further discussion centered around redissolution, which can be used to back-calculate particle size. It was stated that this approach is easier than measuring the particle size directly. A series of experiments conducted with posaconazole was also discussed, as in vitro experiments using the transfer assay showed an aggregate structure that was neither crystalline nor amorphous. More specifically, the obtained phase-separated species appeared to be metastable, reaching a plateau above the thermodynamic solubility but below the supersaturated state. The attributes of this phase-separated species could not be further elucidated. This observation challenges the current practice of in vitro to in vivo translation: can we assume from these studies that what happens in vitro translates to in vivo? As in vivo particles do not grow in an isolated medium, they might have attributes different from those of precipitates isolated from in vitro experiments. There was also some discussion about overpredicting precipitation, as ketoconazole precipitates strongly in vitro, but in vivo it was determined that only about 10% of the dose precipitated.
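The "back-calculate particle size from redissolution" idea can be sketched with a simple diffusion-layer argument: for small monodisperse spheres with a diffusion-layer thickness on the order of the particle radius, the initial first-order redissolution constant under sink conditions scales roughly as k ≈ 3·D·Cs/(ρ·r²), so r ≈ √(3·D·Cs/(ρ·k)). All parameter values below are illustrative assumptions, not measured data:

```python
# Back-of-the-envelope estimate of precipitate radius from an observed
# first-order redissolution constant; diffusion-layer model for small spheres.
# All inputs are illustrative assumptions.

import math

def radius_from_redissolution(k_obs_per_s,
                              diffusivity_m2_s=5e-10,   # assumed drug diffusivity
                              cs_kg_m3=0.05,            # assumed solubility (50 ug/mL)
                              density_kg_m3=1300.0):    # assumed particle density
    """Estimate particle radius (m) from first-order redissolution constant (1/s)."""
    return math.sqrt(3.0 * diffusivity_m2_s * cs_kg_m3
                     / (density_kg_m3 * k_obs_per_s))

r_m = radius_from_redissolution(1e-3)   # observed k = 1e-3 1/s (hypothetical)
r_um = r_m * 1e6                        # radius in micrometers
```

The inverse relationship is the practically useful part: faster redissolution implies smaller precipitated particles, which is the quantity a PBBM needs for the redissolution step.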
It was again stressed that a curated set of case examples with well-understood in vivo behavior would be helpful to define parameters that need to be better characterized in vitro.
3.4.2.4 Q4: What Are the Best Practices for Modeling Precipitation under Physiologically Relevant Luminal Conditions–First Order Fixed Rate Constant/Mechanistic Nucleation and Growth Predictions in Dynamic pH/Fluid Volumes?
The first approach brought up was a bottom-up approach, in which the kinetics observed in the in vitro experiment are modeled. Subsequently, the dissolution–precipitation model is integrated into a PBBM framework via IVIVE to simulate the behavior in vivo. This approach was preferred over a top-down approach, in which precipitation kinetics are fitted to observed PK data. From a physical and mechanistic modeling perspective, it was considered valuable to separate the processes involved in dissolution and precipitation from each other, measure them individually, and then combine all of the individual mechanisms in a model to obtain an improved outcome. A question arose as to whether anyone has used the gastric emptying half-life in modeling and then investigated its variability. Similarly, it was emphasized that physiological variability needs to be accounted for in the PBBM, in addition to the variability associated with the pharmaceutical performance of the delivery system. Given the extreme interindividual variability in parameters related to precipitation, population simulations will likely cover the whole range of precipitation constants. Norvir (ritonavir formulated as an ASD tablet) was given as an example where interindividual variability should be considered. Additionally, in the case of a precipitation risk, consideration should be given to mitigating this risk through the use of precipitation inhibitors or by using a salt of the drug. The latter option might be an alternative to more complex bioenhancement systems such as ASD formulations.
One answer referenced tacrolimus (an ASD) for which the precipitation risk was mitigated through formulation; however, it should always come down to an understanding of the biopharmaceutics risk.
3.4.2.5 Q5: How Can Precipitation from Supersaturating Delivery Systems, Such as ASDs, Be Modeled? What Options Are Available to Account for Complex Speciation, Including Liquid–liquid Phase-Separated Nanodroplets?
This is particularly challenging and requires further work because of the complexities that arise with, for example, the presence of polymers and surfactants, which make prediction difficult. Mass transfer models should account for the mixed speciation of the drug. The consensus in the room was that modeling needs to be guided by the accurate in vitro performance of a supersaturating system.
Biphasic dissolution assays that are designed in a two-stage manner (e.g., addition of the lipid phase and pH shift after 30 min to reflect the transfer from stomach to the upper intestine) were also considered by some participants as an improved method. It is also important to understand what the solid state of the precipitant is for modeling. The particle size (distribution) of the precipitate(s) should ideally be measured in vitro so that it can be included in a PBBM, along with the measurement of pH values and whether they have changed to account for these inputs in the model. It was suggested that precipitated material be isolated and dissolution measured to accurately characterize the redissolution performance. It was also suggested that two-phasic and/or transfer computational models can be used as a good approach when attempting to correlate in vitro and in vivo supersaturation concentrations. Another member in the audience from industry stated that different methodologies are used based on whether they are looking at the drug product or the drug substance. The totality of data obtained from different in vitro experiments should then be considered. Though it is always difficult to incorporate a permeability component with in vitro systems, a complex model with an absorptive component has been helpful. The audience seemed to agree that how you present a drug to an absorptive surface area in vitro is very important because in vitro modeling can overestimate concentrations at which precipitation occurs. For many compounds in developmental stages, though early precipitation data may have raised a red flag, usually those early precipitation risks are not as limiting as predicted by in vitro data; therefore, should we consider permeability to be a saver for some drugs that precipitate? This again stresses the importance of including an absorption compartment in the in vitro dissolution assay. 
Ultimately, while there are many different transfer models used to measure the rate of precipitation, there is not a one size fits all approach, as the complexity of the assay required depends upon the question (e.g., drug precipitation propensity, impact of formulation, etc.) that we are asking. Q2: Can We Identify the Class of Compounds for Which the Need to Integrate a Permeation-Like Process in the Precipitation Assay Is Essential for Accurate Estimation of Precipitation, and What Are the Recommended Experimental Options for This? It was suggested to build a data set of molecules across the range of physicochemical space to define supersaturation and precipitation performance that could be used in verifying models. It was noted that a number of compounds had been studied during the IMI OrBiTo project and a recent review that summarizes the available human data from intubation for a large number of molecules could be a useful starting point for such a database. Q3: What Are the Options/Best Practices for Characterizing (Or Predicting) Precipitated Material Attributes (Form, Particle Size, and Solubility) for Accurate Input to PBBM? Initially, an attendee in the audience stated that prior to looking into the software capabilities samples should be collected so that the solid state of the precipitate and its particle size can be determined and measured. Though many agreed, based on the responses from industry, this is not a common practice. Some industry representatives reported that precipitated material attributes are nowadays increasingly characterized, but concerns were raised about whether enough precipitated material could be obtained for analysis. However, drugs may precipitate as amorphous forms, which are known to exhibit higher solubility, or as crystalline forms that exhibit lower solubility. An example of gefitinib was discussed and shows that gefitinib precipitates in an amorphous form that converts to a crystalline form. 
This example underscores the importance of understanding the solid-state characteristics for modeling. Nevertheless, the question remains: What is the best approach (mechanistic or descriptive) given that there is no standard practice? Further discussion centered around redissolution, which can be used to back-calculate particle size. It was stated that this approach is easier than measuring the particle size. A series of experiments conducted with posaconazole were also discussed, as in vitro experiments using the transfer assay showed an aggregate structure that was not crystalline or amorphous. More specifically, the obtained phase-separated species appeared to be metastable, reaching a plateau above the thermodynamic solubility but below the supersaturated state. The attributes of this phase-separated species could not be further elucidated. This observation challenges the current practice of in vitro to in vivo translation; can we assume from these studies that what happens in vitro translates to in vivo? As in vivo particles do not grow in an isolated medium, they might have attributes different from those of precipitates isolated from in vitro experiments. There was also some discussion about overpredicting precipitation, as ketoconazole precipitates strongly in vitro, but in vivo, it was determined that only about 10% of the dose precipitated. It was again stressed that a curated set of case examples with well understood in vivo behavior would be helpful to define parameters that need to be better characterized in vitro.
Q4: What Are the Best Practices for Modeling Precipitation under Physiologically Relevant Luminal Conditions – First Order Fixed Rate Constant/Mechanistic Nucleation and Growth Predictions in Dynamic pH/Fluid Volumes?
The first approach brought up was a bottom-up approach, in which the kinetics observed in the in vitro experiment are modeled.
Subsequently, the dissolution–precipitation model is integrated into a PBBM framework via IVIVE to simulate the behavior in vivo. This approach was preferred over a top-down approach, where precipitation kinetics are fitted to observed PK data. From a physical and mechanistic modeling perspective, it was considered valuable to separate processes involved in dissolution and precipitation from each other, measure them individually, and then combine all of the individual mechanisms in a model to obtain an improved outcome. A question arose regarding whether anyone has used the emptying half-life in modeling and then investigated its variability. Similarly, it was emphasized that physiological variability needs to be accounted for in addition to the variability associated with the pharmaceutical performance of the delivery system in the PBBM. Given the extreme interindividual variability in parameters related to precipitation, population simulations will likely cover the whole range of precipitation constants. Norvir (ritonavir formulated as an ASD tablet) was given as an example where interindividual variability should be considered. Additionally, in the case of a precipitation risk, consideration should be given to mitigating this risk through the use of precipitation inhibitors or by using a salt of the drug. The latter option might be an alternative to more complex bioenhancement systems like ASD formulations. One answer referenced tacrolimus (an ASD) in which the precipitation risk was mitigated through formulation; however, it should always come down to an understanding of the biopharmaceutics risk.
Q5: How Can Precipitation from Supersaturating Delivery Systems, Such as ASDs, Be Modeled? What Options Are Available to Account for Complex Speciation, Including Liquid–Liquid Phase-Separated Nanodroplets?
This is particularly challenging and something that requires further work due to the complexities that arise with the presence of, for example, polymers and surfactants, which make prediction difficult. Mass transfer models should account for the mixed speciation of the drug. The consensus in the room was that modeling needs to be guided by the accurate in vitro performance of a supersaturating system.
BO Session E - Permeability: From in Vitro Best Practices to in Vivo Relevance
This session began with speaker Hans Lennernäs (Uppsala University) and was led by Christer Tannergren (AstraZeneca) and Rodrigo Cristofoletti (University of Florida), with Xiaojun Ren (Novartis) and Eleftheria Tsakalozou (FDA) as scribes.
3.5.1 Presentation
3.5.1.1 Introduction
By understanding the permeability of a drug candidate in the GI tract, medicinal chemists and biopharmaceutical scientists are expected to be able to design efficacious and safe drug compounds. These new drug compounds, together with improved knowledge of regional intestinal permeability, will also allow them to optimize and develop pharmaceutical formulations with high oral bioavailability and less intra- and interindividual variability and to better control the plasma concentration–time–effect relationship. The investigation and optimization of intestinal permeability, together with other key factors such as potency, efficacy, and drug–drug interactions, are crucial in the drug discovery and development processes of oral pharmaceutical products. Permeability plays a key role in determining the rate and extent of intestinal absorption of a drug. If a drug has poor permeability (BCS class III or IV), it may not be effectively transported into the bloodstream and could have a limited and highly variable therapeutic response.
On the other hand, if a drug has high permeability and a poor pH-dependent solubility (BCS class II), the low and erratic rate and extent of absorption may be overcome with a sophisticated and innovative formulation design, such as ASD. This allows for the development of oral products with less variable plasma PK and more effective doses, which can improve patient compliance and overall treatment outcomes. Determining the intestinal permeability of drug candidates has significantly contributed to reducing the attrition rates of drugs in development. Previously, about 40% of drug candidates were discarded due to poor ADME (absorption, distribution, metabolism, and excretion) properties. However, by focusing on understanding and optimizing permeability, this attrition rate was reduced to around 10%. The limited permeability observed 2–3 decades ago can be attributed to the fact that, during that time, a significant number of drug candidates targeted extracellular sites, and membrane permeation was not considered a crucial aspect of pharmacological discovery efforts. Recent advancements in drug discovery and medicinal and biological chemistry have expanded the possibilities for developing oral drugs that were previously considered to have unfavorable physicochemical properties. These new modalities, with physicochemical properties beyond the rule of five, have opened up a broader range of options for formulating drugs that can be effectively absorbed across the intestinal barriers. In addition, considering the permeability along the human GI tract is an essential step in the innovation and development of oral pharmaceutical products featuring new modalities and challenging physicochemical properties.
3.5.1.2 Intestinal Permeability Models and Approaches
Overall, the intestinal barrier is a complex system that plays a crucial role in maintaining a delicate balance between absorption and protection.
It acts as a physical and immunological barrier to prevent the invasion of pathogens and the absorption of toxic substances. The small intestine, with its unique architecture and cell composition, is the major site of nutrient and drug absorption in the body. The intestinal mucosa is a dynamic physiological barrier that receives and reacts to neuroendocrine signals to maintain a harmonious interplay between absorptive permeability, protective barrier functions, and secretory functions. Regional differences along the GI tract, such as between the small and large intestine, can have significant implications for pharmaceutical development. It is important to consider these biopharmaceutical and physiological factors in the design of drugs to ensure their optimal delivery, absorption, and effectiveness. The intestinal epithelium, the fastest-renewing tissue in humans, is made up of multiple cell types with a microenvironment consisting of a dynamic multiparametric and three-dimensional (3D) architecture, making it particularly challenging to recreate in vitro. The intestinal tissue is organized in finger-like protrusions called villi and invaginations called crypts. Intestinal organoids, also known as enteroids, colonoids, or “mini-guts”, are three-dimensional structures derived from stem cells that recapitulate the architecture and function of the intestine. Furthermore, recent advances in cellular biology and microfabrication technologies have together led to the development of various bioengineered systems to model and provide more in vivo relevant investigations of the intestinal mucosal physiology and pathophysiology. These microfabricated in vitro models may constitute an alternative to current approaches for screening and biopharmaceutics evaluation, as well as provide insights into fundamental mechanisms governing intestinal homeostasis and pathologies.
It is important to evaluate drug substance solubility, as drugs must be dissolved prior to transport across the intestinal barriers. The mass transfer ( J ) of dissolved drug molecules across semipermeable intestinal barriers is strongly affected by the nature and functions of the intestinal mucosal barrier and especially the epithelial barrier. Different transport mechanisms can be involved in the process, and more than one mechanism may be employed for a single drug molecule. The net permeation process for a drug occurs via passive transcellular (lipoidal) and paracellular diffusion and/or carrier-mediated transport in both the absorptive and secretory (efflux) directions to various extents. To accurately determine the permeability of a drug, it is necessary to quantify the concentration of the drug adjacent to the intestinal membrane. This depends on the local distribution model applied in the various permeability models. A variety of in silico, in vitro, and in vivo permeability models are used in biopharmaceutical studies during all parts of the drug discovery/development process to predict and characterize human drug absorption. The selected intestinal permeability model will need to reflect the intended use of the permeability estimate at different stages of the drug development process. Permeability models comprise simple simulations and in vitro systems with high-throughput capacity, which are typically used in early drug development to screen compounds. More complex models involving animals, humans, and/or PBBM are employed in the later stages of nonclinical or early clinical drug development. This is particularly crucial when more in vivo relevant predictions are essential for successful translational science and product development. For instance, it is obvious that regional permeability data play a pivotal role in shaping decisions regarding the choice and design of modified release dosage forms.
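The role of P eff in the mass transfer J can be made concrete: for a well-mixed luminal segment modeled as a cylinder, J = P eff · C, and the corresponding first-order absorption rate constant is k a = 2·P eff /R, where 2/R is the surface-to-volume ratio of a cylinder. This is a standard textbook conversion, sketched below under the assumption of a tube radius of 1.75 cm, a commonly used literature value not taken from this text.

```python
def absorption_rate_constant(peff_cm_s, radius_cm=1.75):
    """First-order absorption rate constant for a cylindrical gut segment:
    ka = 2 * Peff / R (units: s^-1)."""
    return 2.0 * peff_cm_s / radius_cm

def flux(peff_cm_s, conc_mg_ml):
    """Mass flux across the membrane, J = Peff * C (mg per cm^2 per s)."""
    return peff_cm_s * conc_mg_ml
```

The k a form is what compartmental absorption models in PBBM software typically consume, which is why a reliable P eff estimate propagates directly into the predicted rate of absorption.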
Human fraction dose absorbed (fa) and measured jejunal permeability can be thought of as potential prediction gold standards. Intestinal catheters have been used for decades in physiology, nutrition, microbiology, PK, and biopharmaceutic research. Studies involving catheters of different lengths and sizes have significantly increased the knowledge regarding the function and regulation of various processes of the human GI tract. The gold-standard permeability values are those that are determined with GI devices after local single dose administration or perfusion of a certain intestinal segment. A review has compiled historical human intestinal P eff values of 80 substances from 61 clinical trials performed in all parts of the human intestinal tract. The investigated substances include drugs, monosaccharides, amino acids, dipeptides, vitamins, steroids, bile acids, ions, fatty acids, and water. It is well-known that intestinal catheters that are intended to be placed in the more distal small intestine or even proximal colon are challenging for biopharmaceutical researchers and clinicians. Single-pass perfusion of a certain region of rat intestine (in situ) is the best characterized and most thoroughly validated animal model for investigations of small and large intestinal permeability. A high correlation between human and rat small intestine (R² = 0.8–0.95) was observed for drug intestinal permeability with both carrier-mediated absorption and passive diffusion mechanisms. A moderate correlation between the two species was also found for the expression levels of transporters in the duodenum, which provides evidence of a similarity in the molecular mechanisms of drug absorption. Transport properties (permeability) for different compounds were also highly correlated between rat and human when using rat intestinal specimens in the Ussing chamber model.
In contrast, no correlation between rat and human intestine was found for the expression of metabolizing enzymes, which may adequately account for the well-established difference in drug metabolism and oral bioavailability in the two species.
3.5.1.3 Immediate and Modified Release in the Design of the Oral Dosage Form
Design and development of the most appropriate oral dosage form depend on the biopharmaceutical properties, terminal half-life (i.e., dosing rate), and plasma exposure–effect relationship for the drug. The fraction dose absorbed (fa) needs to be synchronized to intestinal permeability, dissolution rate, and regional intestinal transit for the final design of the dosage form. The small intestine is the major site of nutrient and drug absorption in the body, which is established with a characteristic 3D architecture and cell composition. It is recognized that regional differences exist along the GI tract regarding barrier functions, neuroendocrine processes, and immunological effects, which have a major impact on pharmaceutical development. Interestingly, a larger surface area of the intestinal lining is at a higher risk of being highly exposed to digestive enzymes, potentially toxic xenobiotics, and luminal microbiota. Thus, it might be that mammals try to find an optimal balance between protection and service by having a small surface area that prevents extensive uptake and epithelial exposure to luminal content and simultaneously provides a large enough mucosal surface for optimal digestion and nutrient absorption. Quantitative geometrical data of the human GI system vary considerably, especially the surface area enlargement of the intestine due to folds, villi, and microvilli. The inner surface of the small intestine is grossly enlarged by folds, villi, and microvilli, whereas the large intestine mucosa does not have folds comparable to those of the plicae circulares, except in the rectum.
It is claimed that the total surface area of the intestinal mucosa is about the size of a tennis court (260–300 m²), with a reported value of 0.3 m² for the large intestine. It has also been claimed that the major part of an orally administered drug is absorbed in the jejunum/ileum, as these account for 99% of the total absorption surface. However, according to Fändriks and Helander (2014), the small intestine represents about 92–93% of the total intestinal surface area, which leaves some surface area in the large intestine for drug absorption from oral modified release formulations.
3.5.1.4 Intestinal Transport Across the Intestinal Barrier
The permeation of a dissolved drug molecule across semipermeable biological barriers is dependent on the molecular properties of the drug, the transport mechanism(s), the drug concentration, and the nature and conditions of the barrier. The transport mechanisms for a drug molecule may include passive lipoidal and paracellular diffusion and/or carrier-mediated (CM) transport in both the absorptive and excretory directions. Recently, the CM transport route has been proposed to be the universal transport mechanism, with no impact from passive lipoidal diffusion. However, Hans Lennernäs indicated that the experimental evidence for this transporter-only theory is weak, and the opposing view that there is a coexistence between CM and passive transport processes is more probable. CM transporters are primarily important for the absorptive transport of water-soluble nutrients, such as glucose, vitamins, and amino acids, where they enable uptake from, for instance, the intestinal lumen into the bloodstream. However, this transport mechanism might be important for some drug compounds, such as levodopa and valacyclovir, but is in general considered relatively rare.
An investigational drug having a (net) in vitro efflux ratio (ER) higher than 2 is classified as an efflux transporter substrate, when any pH difference is considered in the applied in vitro model (e.g., Caco-2 cells or transfected cells overexpressing P-gp). Rhodamine 123, digoxin, vinblastine, paclitaxel, and quinidine are often used as probe substrates for demonstrating the presence of the P-gp transporter. The ER values for vinblastine, digoxin, cimetidine, and quinidine were 4.25, 5.41, 1.79, and 5.85, respectively. Despite being classified as efflux transporter substrates, their fraction dose absorbed is 65% for cimetidine and >80% for the other three drugs. This again demonstrates that drugs with an identified ER higher than 2 need to be investigated in vivo, since it has often been shown that there is no or limited in vivo P-gp efflux effect on the extent of absorption. Paclitaxel has been reported to be a P-gp substrate, as shown in recent in vitro (Caco-2 model) and in vivo PK studies in rats using the specific P-gp and breast cancer resistance protein (BCRP) inhibitor encequidar. Altogether, these studies support that P-gp might have a quantitative effect when the efflux ratio is extensive. However, the role of an efflux substrate remains unclear in many cases. For instance, a selective estrogen receptor degrader-antagonist was reported to have a high efflux (ER > 30), which was saturable and decreased significantly at concentrations at and above 30 μM (i.e., ER was <15 at concentrations ≥30 μM). The solubility was high in aqueous media (>900 μM), and the candidate had a high fraction absorbed in all species examined (fa ≥ 50–100%). Despite being a drug candidate with a high ER, it had favorable physicochemical properties that resulted in good oral bioavailability in several preclinical species and potent in vivo activity in a mouse xenograft model.
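The ER-based classification discussed above reduces to a simple rule; as the cimetidine and paclitaxel examples illustrate, an in vitro flag is only a starting point, and in vivo relevance must still be confirmed. A minimal sketch (the helper names are illustrative, not from any specific software):

```python
def efflux_ratio(papp_b_to_a, papp_a_to_b):
    """Net efflux ratio: ER = Papp(basolateral->apical) / Papp(apical->basolateral)."""
    return papp_b_to_a / papp_a_to_b

def is_putative_efflux_substrate(er, cutoff=2.0):
    """Common assay convention: ER > 2 flags a putative efflux-transporter
    substrate; in vivo confirmation is still required."""
    return er > cutoff
```

Applying the rule to the ER values quoted above flags vinblastine (4.25), digoxin (5.41), and quinidine (5.85) but not cimetidine (1.79), even though all four are discussed as efflux substrates, which is exactly why the in vivo check matters.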
The regional differences between the colon and the small intestine regarding the expression of efflux transporters and the tight junctions may potentially also affect the rate and extent of colon absorption as well as the prediction performance in this investigation. However, it has previously been concluded that there is no indication that efflux-mediated transport limits colon absorption, which suggests that it is likely the intrinsic passive permeability that is the major determinant of membrane transport in the colon. This is further supported by recently established correlations between in vitro permeability and human colon absorption, where the in vitro assays mainly measure passive drug transport. Furthermore, as the main source for the estimated permeability in this investigation was the Caco-2 model, which is of colonic origin, it is likely that the well-known effect of narrower tight junctions in the colon was appropriately accounted for in the predictions.
3.5.1.5 Conclusions
Regional human intestinal permeability was identified as one important factor for future intestinal permeability determinations in both in vitro and in vivo models. Human regional intestinal permeability is especially important for the validation of existing and improved bioengineered in vitro intestinal transport models. Determinations of in vivo colon permeability are of special urgency but are very difficult in humans. Novel GI capsule systems, GI devices with external control, and capsules connected to long GI-tube methodologies are useful in such projects. In vitro intestinal P app values in the Ussing and 2D cell monolayer models need scaling and adjustment prior to use in PBBM. The choice of permeability model is important for the assessment of the effect of pharmaceutical excipients.
Caco-2 cell monolayers have been shown to often overpredict the potential in vivo effects of pharmaceutical excipients, and this higher sensitivity is explained by the multiple differences between the simple Caco-2 monolayer and the human in vivo intestine, with its additional features like its mucus layer and full neuroendocrine feedback systems. Future intestinal organoids and 3D bioengineered intestinal models might exhibit morphological and physiological features that resemble those of native intestinal mucosa. These more complex in vitro systems are promising but require extensive evaluation and validation prior to use in rational drug discovery and development and for regulatory decision-making. Encequidar and elacridar may be very useful tools to assess the effect of intestinal efflux mediated by P-gp and/or BCRP on the rate and extent of intestinal absorption. Biopharmaceutics has an exciting future with the development of novel GI devices for assessment in humans and animals, bioengineered in vitro systems mimicking in vivo conditions, advanced modeling with molecular dynamics simulation and artificial neural networks (ANN) in drug discovery, and extended use of more accurate PBBMs in all parts of drug development. Model and knowledge development to predict the effective permeability of new and challenging drug candidates beyond Lipinski’s rule of 5, with molar masses above 700 and log D > 5, will be an important part of any future successful drug development. These novel ANN simulation tools for oral drugs may also be applied before synthesis and may even potentially allow for optimization of relevant physicochemical properties of new molecules of interest.
3.5.2 Discussion
The main objective of this part of the session was to discuss best practices for the integration of permeability in PBBM.
3.5.2.1 Q1: What Are the Available Methods to Estimate Jejunal P eff and What Is the Rank Order between the Methods with Regard to Confidence in the P eff Estimation?
The majority of the attendees stated that they use MDCK or Caco-2 cell systems to estimate jejunal P eff . PAMPA may be used at early stages of drug development according to the session participants. An in-house calibration curve is normally used for the in vitro to in vivo permeability extrapolation. A few participants used built-in calibration curves from commercially available software, such as GastroPlus or Simcyp. It was stated that, when a calibration curve is used, it should cover low, moderate, and high permeability compounds. To reduce interstudy or interlaboratory variability, a calibrator, i.e., a compound with known in vivo permeability, is often utilized. On rare occasions, QSAR models have been used directly to estimate P eff . Finally, the participants shared that oral solution PK data can be used to optimize P eff . It was anecdotally agreed that the experimentally obtained in vitro permeability measurements are a measure of passive permeability. When there is a need for characterizing protein-mediated transport, transfected cell lines may be used. While for high passive permeability compounds the impact of protein-mediated efflux may be limited, it is important to characterize the impact of efflux transporters for low passive permeability compounds, understanding the variability of the experimentally obtained V max or K m . For lipophilic compounds, or to address food effects, biorelevant media may be used. The value of in situ permeability in a rat model was discussed in terms of challenges in extrapolation and experimental variability. Most regulators shared that Caco-2 data are most commonly reported in regulatory applications. Canadian and European regulatory agencies indicated that well-controlled in situ data may be accepted.
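An in-house calibration curve of the kind mentioned above is typically a log–log regression of known in vivo P eff values against in vitro P app values for reference compounds spanning low to high permeability; new compounds are then interpolated on that line. The sketch below uses synthetic, hypothetical calibrator data purely to show the mechanics:

```python
import math

def fit_calibration(papp_values, peff_values):
    """Least-squares fit of log10(Peff) = slope * log10(Papp) + intercept
    over a set of reference (calibrator) compounds."""
    xs = [math.log10(v) for v in papp_values]
    ys = [math.log10(v) for v in peff_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict_peff(papp, slope, intercept):
    """Interpolate an in vivo Peff from an in vitro Papp on the fitted line."""
    return 10.0 ** (slope * math.log10(papp) + intercept)
```

This also makes the discussion points concrete: the calibrators must span low to high permeability so that new compounds are interpolated rather than extrapolated, and re-running a calibrator in each study controls interlaboratory shifts of the whole line.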
Differences in how passive P eff and transporter kinetics are integrated into various software need to be considered. There was an agreement that the Caco-2 cell model performs well for high permeability compounds. It is important, though, to cross-check across a variety of data sets and P eff measurements collected using different methodologies.
3.5.2.2 Q2: Confidence in P eff Estimation – Low vs High Permeability Compounds?
Most participants agreed that there is a high degree of confidence in the estimated P eff for high permeability compounds, while the confidence in the estimated P eff for low to moderate permeability compounds was lower. Although no conclusions were made during the discussion regarding a cutoff value between high and low P eff , a P eff of 1.34 × 10⁻⁴ cm/s, corresponding to the measured human jejunal P eff of metoprolol and a fraction absorbed in humans of 90%, has been used previously for this purpose. Similarly, minoxidil, with an observed human fraction absorbed of 85%, can be applied as a divider between high and low permeability. The group also acknowledged that the extensive interlaboratory variability in the measured in vitro permeability is a factor playing a role in the credibility of the final estimates of the human P eff , especially for low permeability compounds. Therefore, a reference data set for high and low permeability marker compounds established within each lab is beneficial.
3.5.2.3 Q3: How Do We Use in Vitro Permeability Data Generated in Biorelevant Media as Input?
Biorelevant media such as FaSSIF and FeSSIF may improve the solubility of some compounds in the apical chamber, but micelle entrapment/binding may bias estimation of apparent permeability ( P app ) across monolayers. For example, the Caco-2 P app of lipophilic compounds like danazol is inversely proportional to the concentration of bile salt in the donor chamber, whereas the P app of more hydrophilic compounds was insensitive to the bile salt concentration.
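The metoprolol-based cutoff discussed above can be tied back to fraction absorbed with the classic single-tube relation fa = 1 − exp(−2·P eff ·T si /R). The small-intestinal transit time and tube radius below are commonly cited literature assumptions, not values from this text, and with them the relation reproduces the ~90% metoprolol benchmark only approximately (it lands in the 80–90% range):

```python
import math

def fraction_absorbed(peff_cm_s, transit_s=3.32 * 3600.0, radius_cm=1.75):
    """Single-compartment cylindrical-tube estimate of the fraction
    absorbed in the small intestine: fa = 1 - exp(-2 * Peff * Tsi / R)."""
    return 1.0 - math.exp(-2.0 * peff_cm_s * transit_s / radius_cm)
```

Because fa saturates with increasing P eff , the relation also hints at why confidence is higher for high permeability compounds: near the plateau, sizable errors in P eff barely move the predicted fa, whereas on the steep low-permeability part of the curve the same error shifts fa substantially.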
Careful consideration should be exercised when using P app data obtained in biorelevant media as input, since they may represent a mixture of micelle entrapment and permeability. Measuring the free concentration in the donor chamber of the Transwell system or modeling drug–micelle binding and P app simultaneously may be helpful, but further studies are needed to assess the benefits of either approach. Finally, when biorelevant media are used, the pH in the mucus layer in vivo needs to be taken into consideration. Mucus pH approximates the upper gut pH. Therefore, considering the mucus layer pH and the composition of the lipids in the mucus in vivo versus in vitro may be key to more reliable estimations of P eff .
3.5.2.4 Q4: P app – P eff Correlation vs Fitting P eff to Observed Data – When to Do What?
Several methodologies have emerged throughout the years to calculate gut permeability (effective permeability, P eff ) for orally administered drug products. Some of these methodologies, such as the Caco-2 in vitro system, were initially developed to select candidates or inform “go/no-go” decisions based on their permeability characteristics or to assess the need for in vivo testing. It was agreed that novel technologies such as PBBM and experimental data have been leveraged to generate in vivo predictions of permeability in virtual populations. Accumulating knowledge in the field indicates that, for high permeability compounds, the Caco-2 in vitro approach appears to be of high confidence. In the absence of data collected in a Caco-2 in vitro system, a mathematical model (such as PBBM) may leverage appropriate clinical PK data sets, e.g., for a nonprecipitating oral solution, to derive (estimate) a P eff value.
The challenge with this approach is the type of observed data that is utilized for predicting (“fitting”) this parameter, which may include individual or mean PK profile data from an oral solution or any other dosage form for which drug release from the dosage form, and not permeation through the gut epithelium, is the rate-limiting step. The use of individual-level PK data may result in inflating the intersubject variability incorporated into an in silico model, while the use of an oral dosage form other than an oral solution may lead to a model parameter identifiability issue. As such, leveraging in vitro permeability data collected in a Caco-2 system toward an initial “bottom-up” approach for P eff is advisable. Confirming the calculated P eff using informative clinical PK data is necessary. In the case where Caco-2 data do not result in satisfactory predictions, it may be acceptable to perform parameter optimization on P eff within the developed PBBM compared with the available clinical PK data. Gut metabolism, particularly relevant for high extraction drugs, was identified as a complicating factor for P eff characterization in the PBBM during the discussion. To handle model identifiability, for PBBM development purposes, applying an in vitro–in vivo extrapolation to inform a “bottom-up” approach in which gut metabolism is mechanistically predicted was suggested. Knowledge of the relative contribution of gut metabolism toward the overall metabolism (clearance) was identified as critical toward accurately capturing the gut extraction ratio in a PBBM. It is expected that this recommended workflow will perform better for highly permeable compounds compared to low permeability compounds, for which additional challenges may need to be addressed.
3.5.2.5 Q5: When Can Permeability Input into PBBM Be Based on Passive Permeability Alone, and When Is There a Need to Account for Uptake/Efflux Transporter Mediated Transport?
Inclusion of transporter effects into an in silico model should be data driven. The decision should be based on the experimental results. Nonlinearity in clinical studies could be due to a transporter effect. Further exploration of the extent of the impact may be warranted. A well-controlled modeling and simulation approach may be accepted by regulatory agencies to investigate the impact of a transporter. A clinical DDI study for transporter inhibition may eventually become warranted.
3.5.2.6 Q6: What Is the Best Practice to Account for Uptake/Efflux Transporter Mediated Transport?
When a transporter effect on the clinical outcome for an orally administered drug is suspected, the extent of the transporter involvement in oral absorption, and specifically gut permeability, should be thoroughly and systematically investigated. In vitro and animal model studies have sometimes been used to determine the need for further in vivo studies in humans. The activity of the transporter protein can be characterized across a dose range of the victim drug and in the presence of well-established transporter activity modifiers within the context of in vitro or in vivo studies exploring potential drug–drug interactions and their clinical impact. These types of studies provide reliable estimates for parameters describing the saturable component of the absorption process governed by transporter proteins (Michaelis–Menten kinetics). These parameters include, but are not limited to, K i (inhibition constant), K I (inhibitor concentration causing half-maximal inactivation), k inact (maximal inactivation rate constant), K m (Michaelis–Menten constant), J max (maximal flux rate), and V max (maximal rate). Depending on the implementation of the saturable absorption process in a mechanistic PBBM, these parameters may serve as model inputs.
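How Michaelis–Menten parameters feed into a concentration-dependent efflux effect can be sketched by treating the transporter's contribution as a permeability-like term P t = J max /(K m + C): this term saturates as C rises, so the apparent efflux ratio falls with increasing concentration, qualitatively matching the saturable-ER example discussed earlier in the session. All parameter values below are arbitrary illustrations in consistent but unspecified units.

```python
def apparent_efflux_ratio(conc, p_passive=10.0, j_max=50.0, k_m=5.0):
    """Apparent ER for a passive pathway opposed by a saturable efflux
    transporter: with Pt = Jmax / (Km + C),
    ER = (Pp + Pt) / (Pp - Pt), valid while Pt < Pp."""
    p_t = j_max / (k_m + conc)
    if p_t >= p_passive:
        raise ValueError("efflux term exceeds passive permeability")
    return (p_passive + p_t) / (p_passive - p_t)
```

Evaluating the function at a low and a high donor concentration shows the characteristic drop in ER as the transporter saturates, which is why characterizing V max (or J max ) and K m across a dose range matters for model inputs.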
With the application of in vitro–in vivo extrapolations, validated for their intended purpose and embedded into PBBMs, population predictions in virtual healthy subjects or patients may be generated. The session participants acknowledged the challenge associated with determining appropriate model inputs for the V max parameter, most probably because the in vitro collected V max values are typically highly dependent on the in vitro system utilized for data collection. Additional considerations regarding the regional expression of transporter proteins across the GI tract and the relative expression of these proteins are expected to inform key decisions on the development and validation of PBBMs that incorporate gut transporters. Guidelines and relevant literature are abundantly available for efflux transporters such as P-glycoprotein (P-gp) and BCRP. These transporter proteins have been documented to limit the bioavailability of orally administered drug substances by pumping them back into the gut lumen after they enter the enterocytes. However, there is a significant knowledge gap regarding uptake gut transporters and their relative contribution to oral absorption, which renders their incorporation into mechanistic in silico models challenging.
3.5.2.7 Q7: What Is the Confidence in Using the Estimated Jejunal P eff to Define the P eff in the Other Compartments?
Based on available experimental data, there is low confidence in using the estimated jejunal P eff to define P eff in the other intestinal compartments. The relative values used for P eff in the jejunum versus colon may be extremely important when modeling ER and MR products. For low permeability compounds, jejunal P eff is considered to be higher than P eff in the colon. This reflects the current general understanding within the community. Commercially available software currently utilizes the same value for P eff in both the jejunum and colon.
This value is corrected for the effective surface area corresponding to the different gut segments. In the absence of observed data, the group agreed that the correction is necessary but may be an overly simplistic approach. The attendees agreed that it is challenging to understand how the effective surface area in the different gut regions is estimated and acknowledged that potential “pockets” in the gut are not considered. 3.5.2.8 Q8: How Can Colon P eff Be Estimated? Experimentally, a colon P eff can be obtained by local administration of the compounds of interest using either intubation or telemetric capsule techniques. Indirectly, when utilizing a modeling approach, the group shared that they would vary the P eff value used as the model input until the simulation reproduces the observed data; this is essentially a parameter-fitting exercise. Presentation 3.5.1.1 Introduction By understanding the permeability of a drug candidate in the GI tract, medicinal chemists and biopharmaceutical scientists are expected to be able to design efficacious and safe drug compounds. These new drug compounds, together with improved knowledge of regional intestinal permeability, will also allow them to optimize and develop pharmaceutical formulations with high oral bioavailability and less intra- and interindividual variability, and to better control the plasma concentration–time–effect relationship. The investigation and optimization of intestinal permeability, together with other key factors such as potency, efficacy, and drug–drug interactions, are crucial in the drug discovery and development processes of oral pharmaceutical products. Permeability plays a key role in determining the rate and extent of intestinal absorption of a drug. If a drug has poor permeability (BCS class III or IV), it may not be effectively transported into the bloodstream and could have a limited and highly variable therapeutic response.
On the other hand, if a drug has high permeability and a poor pH-dependent solubility (BCS class II), the low and erratic rate and extent of absorption may be overcome with a sophisticated and innovative formulation design, such as an amorphous solid dispersion (ASD). This allows for the development of oral products with less variable plasma PK and more effective doses, which can improve patient compliance and overall treatment outcomes. , Determining the intestinal permeability of drug candidates has significantly contributed to reducing the attrition rates of drugs in development. Previously, about 40% of drug candidates were discarded due to poor ADME (absorption, distribution, metabolism, and excretion) properties. However, by focusing on understanding and optimizing permeability, this attrition rate was reduced to around 10%. The limited permeability observed 2–3 decades ago can be attributed to the fact that, during that time, a significant number of drug candidates targeted extracellular sites, and membrane permeation was not considered a crucial aspect of pharmacological discovery efforts. − Recent advancements in drug discovery and medicinal and biological chemistry have expanded the possibilities for developing oral drugs that were previously considered to have unfavorable physicochemical properties. These new modalities, with physicochemical properties beyond the rule of five, have opened up a broader range of options for formulating drugs that can be effectively absorbed across the intestinal barriers. − In addition, considering the permeability along the human GI tract is an essential step in the innovation and development of oral pharmaceutical products featuring new modalities and challenging physicochemical properties. − 3.5.1.2 Intestinal Permeability Models and Approaches Overall, the intestinal barrier is a complex system that plays a crucial role in maintaining a delicate balance between absorption and protection.
It acts as a physical and immunological barrier to prevent the invasion of pathogens and the absorption of toxic substances. The small intestine, with its unique architecture and cell composition, is the major site of nutrient and drug absorption in the body. The intestinal mucosa is a dynamic physiological barrier that receives and reacts to neuroendocrine signals to maintain a harmonious interplay between absorptive permeability, protective barrier functions, and secretory functions. Regional differences along the GI tract, such as between the small and large intestine, can have significant implications for pharmaceutical development. It is important to consider these biopharmaceutical and physiological factors in the design of drugs to ensure their optimal delivery, absorption, and effectiveness. The intestinal epithelium, the fastest renewing tissue in humans, is made up of multiple cell types with a microenvironment consisting of a dynamic multiparametric and three-dimensional (3D) architecture, making it particularly challenging to recreate in vitro. The intestinal tissue is organized in finger-like protrusions called villi and invaginations called crypts. Intestinal organoids, also known as enteroids, colonoids, or “mini-guts”, are three-dimensional structures derived from stem cells that recapitulate the architecture and function of the intestine. , Furthermore, recent combined advances in cellular biology and microfabrication technologies have led to the development of various bioengineered systems to model and provide more in vivo relevant investigations of intestinal mucosal physiology and pathophysiology. These microfabricated in vitro models may constitute an alternative to current approaches for screening and biopharmaceutics evaluation, as well as provide insights into fundamental mechanisms governing intestinal homeostasis and pathologies.
, It is important to evaluate drug substance solubility, as drugs must be dissolved prior to transport across the intestinal barriers. The mass transfer ( J ) of dissolved drug molecules across semipermeable intestinal barriers is strongly affected by the nature and functions of the intestinal mucosal barrier and especially epithelial barrier. Different transport mechanisms can be involved in the process, and more than one mechanism may be employed for a single drug molecule. The net permeation process for a drug occurs via passive transcellular (lipoidal) and paracellular diffusion and/or carrier-mediated transport in both the absorptive and secretory (efflux) directions to various extents. To accurately determine the permeability of a drug, it is necessary to quantify the concentration of the drug adjacent to the intestinal membrane. This depends on the local distribution model applied in the various permeability models. , A variety of in silico, in vitro, and in vivo permeability models are used in biopharmaceutical studies during all parts of the drug discovery/development process to predict and characterize human drug absorption. − The selected intestinal permeability model will need to reflect the intended use of the permeability estimate at different stages of the drug development process. Permeability models comprise simple simulations and in vitro systems with high-throughput capacity, which are typically used in early drug development to sort compounds. More complex models involving animals, humans, and/or PBBM are employed in the later stages of nonclinical or early clinical drug development. This is particularly crucial when more in vivo relevant predictions are essential for successful translational science and product development. For instance, it is obvious that regional permeability data plays a pivotal role in shaping decisions regarding the choice and design of modified release dosage forms. 
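For the passive component, the mass transfer J of dissolved drug described above is often summarized as J = P eff · A · C, the product of effective permeability, membrane area, and the drug concentration adjacent to the membrane. A minimal sketch with hypothetical numbers (any consistent unit system works):

```python
def mass_transfer_rate(p_eff_cm_s, area_cm2, conc_mg_per_cm3):
    """Passive mass transfer across a membrane: J = Peff * A * C.

    Peff in cm/s, area in cm^2, and concentration in mg/cm^3
    (= mg/mL) give an absorption rate in mg/s.
    """
    return p_eff_cm_s * area_cm2 * conc_mg_per_cm3

# Hypothetical high-permeability drug: Peff = 4e-4 cm/s,
# C = 1 mg/mL adjacent to 100 cm^2 of mucosal membrane
rate = mass_transfer_rate(4e-4, 100.0, 1.0)  # 0.04 mg/s
```

This is why quantifying the concentration adjacent to the membrane, as discussed above, matters: the same P eff gives a very different flux depending on the local distribution model used for C.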
, , Human fraction dose absorbed (fa) and measured jejunum permeability can be thought of as potential prediction gold standards. − Intestinal catheters have been used for decades in physiology, nutrition, microbiology, PK, and biopharmaceutics research. Studies involving catheters of different lengths and sizes have significantly increased the knowledge regarding the function and regulation of various processes of the human GI tract. The gold-standard permeability values are those that are determined with GI devices after local single-dose administration or perfusion of a certain intestinal segment. A review has compiled historical human intestinal P eff values of 80 substances from 61 clinical trials performed in all parts of the human intestinal tract. The investigated substances include drugs, monosaccharides, amino acids, dipeptides, vitamins, steroids, bile acids, ions, fatty acids, and water. It is well-known that intestinal catheters that are intended to be placed in the more distal small intestine or even the proximal colon are challenging for biopharmaceutical researchers and clinicians. − Single-pass perfusion of a certain region of rat intestine (in situ) is the best characterized and most thoroughly validated animal model for investigations of small and large intestinal permeability. A high correlation between human and rat small intestine (R² = 0.8–0.95) was observed for drug intestinal permeability with both carrier-mediated absorption and passive diffusion mechanisms. A moderate correlation between the two species was also found for the expression levels of transporters in the duodenum, which provides evidence of a similarity in the molecular mechanisms of drug absorption. Transport properties (permeability) for different compounds were also highly correlated between rat and human when using rat intestinal specimens in the Ussing chamber model.
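One widely used way to connect a measured human jejunal P eff to the fraction dose absorbed treats the small intestine as a cylindrical tube of radius R with mean residence time t res, giving fa = 1 − exp(−2·P eff·t res/R). The sketch below uses commonly quoted nominal values (R ≈ 1.75 cm, t res ≈ 3 h) and an approximately metoprolol-like jejunal P eff; treat the numbers as illustrative assumptions rather than study data:

```python
import math

def fraction_absorbed(p_eff_cm_s, radius_cm=1.75, residence_s=3 * 3600):
    """Tube-model estimate of fa: fa = 1 - exp(-2 * Peff * t_res / R)."""
    return 1.0 - math.exp(-2.0 * p_eff_cm_s * residence_s / radius_cm)

# Approximately metoprolol-like jejunal Peff (~1.3e-4 cm/s) versus
# a hypothetical low-permeability compound
fa_high = fraction_absorbed(1.3e-4)
fa_low = fraction_absorbed(5e-6)
```

The exponential form captures why fa saturates toward 100% for high-permeability drugs while remaining low and sensitive to P eff for poorly permeable ones.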
In contrast, no correlation between rat and human intestine was found for the expression of metabolizing enzymes, which may adequately account for the well-established difference in drug metabolism and oral bioavailability between the two species. − 3.5.1.3 Immediate and Modified Release in the Design of the Oral Dosage Form Design and development of the most appropriate oral dosage form depend on the biopharmaceutical properties, terminal half-life (i.e., dosing rate), and plasma exposure–effect relationship of the drug. The fraction dose absorbed (fa) needs to be synchronized with intestinal permeability, dissolution rate, and regional intestinal transit for the final design of the dosage form. The small intestine is the major site of nutrient and drug absorption in the body, which is established with a characteristic 3D architecture and cell composition. It is recognized that regional differences exist along the GI tract regarding barrier functions, neuroendocrine processes, and immunological effects, which have a major impact on pharmaceutical development. Interestingly, a larger surface area of the intestinal lining is at a higher risk of being highly exposed to digestive enzymes, potentially toxic xenobiotics, and luminal microbiota. Thus, it might be that mammals strike an optimal balance between protection and absorptive function by having a surface area small enough to prevent extensive uptake and epithelial exposure to luminal content while simultaneously providing a mucosal surface large enough for optimal digestion and nutrient absorption. Quantitative geometrical data of the human GI system vary considerably, especially regarding the surface area enlargement of the intestine due to folds, villi, and microvilli. The inner surface of the small intestine is grossly enlarged by folds, villi, and microvilli, whereas the large intestine mucosa does not have folds comparable to those of the plicae circulares, except in the rectum.
It is claimed that the total surface area of the intestinal mucosa is about the size of a tennis court (260–300 m²), with a reported value of 0.3 m² for the large intestine. It has also been claimed that the major part of orally administered drugs is absorbed in the jejunum/ileum, as those account for 99% of the total absorption surface. However, according to Fändriks and Helander in 2014, the small intestine represents about 92–93% of the total intestinal surface area, which leaves some surface area in the large intestine for drug absorption from oral modified release formulations. 3.5.1.4 Intestinal Transport Across the Intestinal Barrier The permeation of a dissolved drug molecule across semipermeable biological barriers is dependent on the molecular properties of the drug, the transport mechanism(s), the drug concentration, and the nature and conditions of the barrier. The transport mechanisms for a drug molecule may include passive lipoidal and paracellular diffusion and/or carrier-mediated (CM) transport in both the absorptive and secretory (efflux) directions. Recently, the CM transport route has been proposed to be the universal transport mechanism, with no impact from passive lipoidal diffusion. However, Hans Lennernäs indicated that the experimental evidence for this transporter-only theory is weak, and the opposing view that there is a coexistence between CM and passive transport processes is more probable. , CM transporters are primarily important for the absorptive transport of water-soluble nutrients, such as glucose, vitamins, and amino acids, where they enable uptake from, for instance, the intestinal lumen into the bloodstream. However, this transport mechanism might be important for some drug compounds, such as levodopa and valacyclovir, but is in general considered relatively rare.
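The coexistence view described here — passive diffusion operating in parallel with a saturable carrier-mediated route — can be expressed as a simple parallel-pathway flux model. The parameter values below are hypothetical and only meant to show that the CM contribution to total flux shrinks at high luminal concentrations, where the carrier saturates and passive diffusion takes over:

```python
def route_fluxes(c, p_passive, j_max, km):
    """Parallel transport routes across the epithelium: passive
    diffusion (linear in C) and carrier-mediated Michaelis-Menten
    uptake (saturates once C >> Km)."""
    passive = p_passive * c
    carrier = j_max * c / (km + c)
    return passive, carrier

# Hypothetical drug with a strong uptake carrier and weak passive route
low_p, low_c = route_fluxes(1.0, p_passive=0.5, j_max=50.0, km=10.0)
hi_p, hi_c = route_fluxes(1000.0, p_passive=0.5, j_max=50.0, km=10.0)

cm_share_low = low_c / (low_p + low_c)    # carrier dominates at low C
cm_share_high = hi_c / (hi_p + hi_c)      # passive dominates at high C
```

This concentration dependence is one practical way the relative contribution of CM versus passive transport can be probed experimentally.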
, An investigational drug having a (net) in vitro efflux ratio (ER) higher than 2 is classified as an efflux transporter substrate, when any pH difference is considered in the applied in vitro model (e.g., Caco-2 cells or transfected cells overexpressing P-gp). , Rhodamine 123, digoxin, vinblastine, paclitaxel, and quinidine are often used as probe substrates for demonstrating the presence of the P-gp transporter. The ERs for vinblastine, digoxin, cimetidine, and quinidine were 4.25, 5.41, 1.79, and 5.85, respectively. Despite being classified as efflux transporter substrates, their fraction dose absorbed is 65% for cimetidine and >80% for the other three drugs. This again demonstrates that drugs with an identified ER higher than 2 need to be investigated in vivo, since it has often been shown that there is no or only a limited in vivo P-gp efflux effect on the extent of absorption. , Paclitaxel has been reported to be a P-gp substrate, and it was recently investigated in in vitro (Caco-2 model) and in vivo PK studies in rats using the specific P-gp and breast cancer resistance protein (BCRP) inhibitor encequidar. , Altogether, these studies support that P-gp might have a quantitative effect when the efflux ratio is extensive. However, the role of an efflux substrate remains unclear in many cases. For instance, a selective estrogen receptor degrader-antagonist was reported to have a high efflux (ER > 30), which was saturable and decreased significantly at concentrations at and above 30 μM (i.e., ER was <15 at concentrations ≥30 μM). The solubility was high in aqueous media (>900 μM), and the candidate had a high fraction absorbed in all species examined (fa ≥ 50–100%). Despite being a drug candidate with a high ER, it had favorable physicochemical properties that resulted in good oral bioavailability in several preclinical species and potent in vivo activity in a mouse xenograft model.
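The ER used for this classification is simply the ratio of the bidirectional apparent permeabilities from the in vitro assay. A sketch of the calculation and the ER > 2 decision rule, with hypothetical P app values chosen so the ratios match the digoxin and cimetidine figures quoted above:

```python
def efflux_ratio(papp_b_to_a, papp_a_to_b):
    """Net efflux ratio ER = Papp(B->A) / Papp(A->B) from a
    bidirectional Caco-2 (or transfected-cell) assay."""
    return papp_b_to_a / papp_a_to_b

def is_efflux_substrate(er, threshold=2.0):
    """In vitro decision rule: ER > 2 flags a putative efflux
    transporter substrate; in vivo relevance still needs study."""
    return er > threshold

# Hypothetical Papp values (1e-6 cm/s) chosen to yield the quoted ERs
digoxin_er = efflux_ratio(10.82, 2.0)     # -> 5.41
cimetidine_er = efflux_ratio(3.58, 2.0)   # -> 1.79
```

As the text stresses, an ER above the threshold is a trigger for in vivo follow-up, not proof of a clinically meaningful efflux effect on the extent of absorption.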
The regional differences between the colon and the small intestine regarding the expression of efflux transporters and the tight junctions may potentially also affect the rate and extent of colon absorption, as well as the prediction performance in this investigation. However, it has previously been concluded that there is no indication that efflux-mediated transport limits colon absorption, which suggests that it is likely the intrinsic passive permeability that is the major determinant of membrane transport in the colon. , This is further supported by recently established correlations between in vitro permeability and human colon absorption, where the in vitro assays mainly measure passive drug transport. , Furthermore, as the main source for the estimated permeability in this investigation was the Caco-2 model, which is of colonic origin, it is likely that the well-known effect of narrower tight junctions in the colon was appropriately accounted for in the predictions. 3.5.1.5 Conclusions Regional human intestinal permeability was identified as one important factor for future intestinal permeability determinations in both in vitro and in vivo models. Human regional intestinal permeability is especially important for the validation of existing and improved bioengineered in vitro intestinal transport models. Determinations of in vivo colon permeability are of special urgency but are very difficult in humans. Novel GI capsule systems, GI devices with external control, and capsules connected to long GI-tube methodologies are useful in those projects. In vitro intestinal P app values in the Ussing and 2D cell monolayer models need scaling and adjustment prior to use in PBBM. The choice of permeability model is important for the assessment of the effect of pharmaceutical excipients.
Caco-2 cell monolayers have been shown to often overpredict the potential in vivo effects of pharmaceutical excipients, and this higher sensitivity is explained by the given multiple differences between the simple Caco-2 monolayer and human in vivo intestine with its additional features like its mucus layer and full neuroendocrine feedback systems. , − Future intestinal organoids and 3D bioengineered intestinal models might exhibit morphological and physiological features that resemble those of native intestinal mucosa. These more complex in vitro systems are promising but require extensive evaluation and validation prior to use in rational drug discovery and development and for regulatory decision-making. Encequidar and elacridar may be very useful tools to assess the effect of intestinal efflux mediated by P-gp and/or BCRP on the rate and extent of intestinal absorption. Biopharmaceutics has an exciting future with the development of novel GI devices for assessment in humans and animals, bioengineered in vitro systems mimicking in vivo, advanced modeling with molecular dynamic simulation and artificial neural network (ANN) in drug discovery, and extended use of more accurate PBBMs in all part of drug development. Model and knowledge development to predict the effective permeability of new and interesting challenging drug candidates beyond Lipinski’s rule of 5 with a molar mass above 700 and Log D > 5 will be an important part for any future successful drug development. − These novel ANN simulation tools for oral drugs may also be applied before synthesis and even potentially allow for optimization of relevant physicochemical properties of new molecules of interest. , Introduction By understanding the permeability of a drug candidate in the GI tract, medicinal chemists and biopharmaceutical scientists are expected to be able to design efficacious and safe drug compounds. 
These new drug compounds together with improved knowledge of regional intestinal permeability will also allow them to optimize and develop pharmaceutical formulations with high oral bioavailability and less intra- and interindividual variability and to better control of the plasma concentration–time effect relationship. The investigation and optimization of intestinal permeability are among other key factors, such as potency, efficacy, and drug–drug interactions, that are crucial in the drug discovery and development processes of oral pharmaceutical products. Permeability plays a key role in determining the rate and extent of intestinal absorption of a drug. If a drug has poor permeability (BCS class III or IV), it may not be effectively transported into the bloodstream and could have a limited and highly variable therapeutic response. On the other hand, if a drug has high permeability and a poor pH-dependent solubility (BCS class II), the low and erratic rate and extent of absorption may be overcome with a sophisticated and innovative formulation design, such as ASD. This allows for the development of oral products with less variable plasma PK and more effective doses, which can improve patient compliance and overall treatment outcomes. , Determining the intestinal permeability of drug candidates has significantly contributed to reducing the attrition rates of drugs in development. Previously, about 40% of drug candidates were discarded due to poor ADME (absorption, distribution, metabolism, and excretion) properties. However, by focusing on understanding and optimizing permeability, this attrition rate was reduced to around 10%. The limited permeability observed 2–3 decades ago can be attributed to the fact that, during that time, a significant number of drug candidates targeted extracellular sites, and membrane permeation was not considered a crucial aspect of pharmacological discovery efforts. 
− Recent advancements in drug discovery and medicinal and biological chemistry have expanded the possibilities for developing oral drugs that were previously considered to have unfavorable physicochemical properties. These new modalities, with physicochemical properties beyond the rule of five, have opened up a broader range of options for formulating drugs that can be effectively absorbed across the intestinal barriers. − In addition, considering the permeability along the human GI tract is an essential step in the innovation and development of oral pharmaceutical products featuring new modalities and challenging physicochemical properties. − Intestinal Permeability Models and Approaches Overall, the intestinal barrier is a complex system that plays a crucial role in maintaining a delicate balance between absorption and protection. It acts as a physical and immunological barrier to prevent the invasion of pathogens and the absorption of toxic substances. The small intestine, with its unique architecture and cell composition, is the major site of nutrient and drug absorption in the body. Intestinal mucosa is a dynamic physiological barrier that receives and reacts to neuroendocrine signals to maintain a harmonious interplay between absorptive permeability, protective barrier functions, and secretory functions. Regional differences along the GI tract, such as between the small and large intestine, can have significant implications for pharmaceutical development. It is important to consider these biopharmaceutical and physiological factors in the design of drugs to ensure their optimal delivery, absorption, and effectiveness. The intestinal epithelium, the fastest renewing tissue in human, is made up of multiple cell types with a microenvironment consisting of a dynamic multiparametric and three-dimensional (3D) architecture, making it particularly challenging to recreate in vitro. 
The intestinal tissue is organized in finger-like protrusions called villi and invaginations called crypts. Intestinal organoids, also known as enteroids, colonoids, or “mini-guts”, are three-dimensional structures derived from stem cells that recapitulate the architecture and function of the intestine. , Furthermore, combined recent advances in cellular biology and microfabrication technologies have led to the development of various bioengineered systems to model and provide more in vivo relevant investigations of the intestinal mucosal physiology and pathophysiology. These microfabricated in vitro models may constitute an alternative to current approaches for screening and biopharmaceutics evaluation, as well as provide insights into fundamental mechanisms governing intestinal homeostasis and pathologies. , It is important to evaluate drug substance solubility, as drugs must be dissolved prior to transport across the intestinal barriers. The mass transfer ( J ) of dissolved drug molecules across semipermeable intestinal barriers is strongly affected by the nature and functions of the intestinal mucosal barrier and especially epithelial barrier. Different transport mechanisms can be involved in the process, and more than one mechanism may be employed for a single drug molecule. The net permeation process for a drug occurs via passive transcellular (lipoidal) and paracellular diffusion and/or carrier-mediated transport in both the absorptive and secretory (efflux) directions to various extents. To accurately determine the permeability of a drug, it is necessary to quantify the concentration of the drug adjacent to the intestinal membrane. This depends on the local distribution model applied in the various permeability models. , A variety of in silico, in vitro, and in vivo permeability models are used in biopharmaceutical studies during all parts of the drug discovery/development process to predict and characterize human drug absorption. 
− The selected intestinal permeability model will need to reflect the intended use of the permeability estimate at different stages of the drug development process. Permeability models comprise simple simulations and in vitro systems with high-throughput capacity, which are typically used in early drug development to sort compounds. More complex models involving animals, humans, and/or PBBM are employed in the later stages of nonclinical or early clinical drug development. This is particularly crucial when more in vivo relevant predictions are essential for successful translational science and product development. For instance, it is obvious that regional permeability data plays a pivotal role in shaping decisions regarding the choice and design of modified release dosage forms. , , Human fraction dose absorbed (fa) and measured jejunum permeability can be thought of as potential prediction gold standards. − Intestinal catheters have been used for decades in physiology, nutrition, microbiology, PK, and biopharmaceutic research. Studies involving catheters of different lengths and sizes have significantly increased the knowledge regarding the function and regulation of various processes of the human GI tract. The gold-standard permeability values are those that are determined with GI devices after local single dose administration or perfusion of a certain intestinal segment. A review has compiled historical human intestinal P eff values of 80 substances from 61 clinical trials performed in all parts of the human intestinal tract. The investigated substances include drugs, monosaccharaides, amino acids, dipeptides, vitamins, steroids, bile acids, ions, fatty acids, and water. It is well-known that intestinal catheters that are intended to be placed in the more distal small intestine or even proximal colon are challenging to biopharmaceutical researchers and clinicians. 
− Single-pass perfusion of a certain region of rat intestine (in situ) is the best characterized and most thoroughly validated animal model for investigations of small and large intestinal permeability. A high correlation between human and rat small intestine ( R 2 = 0.8–0.95) was observed for drug intestinal permeability with both carrier-mediated absorption and passive diffusion mechanisms. Moderate correlation between the two species was also found for the expression levels of transporters in the duodenum, which provides evidence of a similarity in the molecular mechanisms of drug absorption. Transport properties (permeability) for different compounds were also highly correlated between rat and human when using rat intestinal specimens in the Ussing chamber model. In contrast, no correlation between rat and human intestine was found for the expression of metabolizing enzymes, which may adequately account for the well-established difference in drug metabolism and oral bioavailability in the two species. − Immediate and Modified Release in the Design of the Oral Dosage Form Design and development of the most appropriate oral dosage form depend on biopharmaceutical properties, terminal half-life (i.e., dosing rate), and plasma exposure effect relationship for the drug. The fraction dose absorbed (fa) needs to be synchronized to intestinal permeability, dissolution rate, and regional intestinal transit for the final design of the dosage form. The small intestine is the major site of nutrient and drug absorption in the body, which is established with a characteristic 3D architecture and cell composition. It is recognized that regional differences exist along the GI tract regarding barrier functions, neuroendocrine processes, and immunological effects, which have a major impact on pharmaceutical development. 
Interestingly, a larger surface area of the intestinal lining is at a higher risk of being highly exposed by digestive enzymes, potential toxic xenobiotics, and luminal microbiota. Thus, it might be that mammals try to find an optimal balance between protection and service by having a small surface area that prevents extensive uptake and epithelial exposure to luminal content and simultaneously provides a large enough mucosal surface for optimal digestion and nutrient absorption. Quantitative geometrical data of the human GI system vary considerably, especially the surface area enlargement of the intestine due to folds, villi, and microvilli. The inner surface of the small intestine is grossly enlarged by folds, villi, and microvilli, and the large intestine mucosa does not have folds comparable to those of the plicae circularis, except in the rectum. It is claimed that the total surface area of the intestinal mucosa is about the size of a tennis court (260–300 m 2 ) with a reported value of 0.3 m 2 for the large intestine. It has also been claimed that the major part of orally administered drugs are absorbed in the jejunum/ileum, as those account for 99% of the total absorption surface. However, according to Fändriks and Helander in 2014 the small intestine represents about 92–93% of the total intestinal surface area, which leaves some surface area in the large intestine for drug absorption from oral modified release formulations. Intestinal Transport Across Intestinal Barrier The permeation of a dissolved drug molecule across semipermeable biological barriers is dependent on the molecular properties of the drug, transport mechanism(s), drug concentration, and the nature and conditions of the barrier. The transport mechanisms for a drug molecule may include passive lipoidal and paracellular diffusion and/or carrier-mediated (CM) transport in both the absorptive and excretive directions. 
Recently, the CM transport route has been proposed to be the universal transport mechanism, with no impact from passive lipoidal diffusion. However, Hans Lennernäs indicated that the experimental evidence for this transporter-only theory is weak, and the opposing view that there is a coexistence between CM and passive transport processes is more probable. , CM transporters are primarily important for the absorptive transport of water-soluble nutrients, such as glucose, vitamins, and amino acids, where they enable uptake from, for instance, the intestinal lumen into the bloodstream. However, this transport mechanism might be important for some drug compounds, such as levodopa and valacyclovir, but is in general considered as relatively rare. , An investigational drug having a (net) in vitro efflux ratio (ER) higher than 2 is classified as an efflux transporter substrate, when any pH difference is considered in the applied in vitro model (e.g., Caco-2 cells or transfected cells overexpressing P-gp). , Rhodamine 123, digoxin, vinblastine, paclitaxel, and quinidine are often used as probe substrates for demonstrating the presence of the P-gp transporter. The ER for vinblastine, digoxin, cimetidine, and quinidine were 4.25, 5.41, 1.79, and 5.85, respectively. Despite being classified as an efflux transporter substrate, their fraction dose absorbed is 65% for cimetidine and >80% for the other three drugs. This again demonstrates that drugs with an identified ER higher than 2 need to be investigated in vivo since it has often been shown that there is no or limited in vivo P-gp efflux effect on the extent of absorption. , Paclitaxel has been reported to be a P-gp substrate and in recent in vitro (Caco-2 model) and in vivo PK studies in rats by using the specific P-gp and Breast Cancer Resistance Protein (BCRP) inhibitor encequidar. , Altogether these studies support that P-gp might have a quantitative effect when efflux ratio is extensive. 
However, the role of an efflux substrate remains unclear in many cases. For instance, a selective estrogen receptor degrader-antagonist was reported to have a high efflux (ER > 30), which was saturable and decreased significantly at concentrations at and above 30 μM (i.e., ER was <15 at concentrations ≥30 μM). The solubility was high in aqueous media (>900 μM), and the candidate had a high fraction absorbed in all species examined (fa ≥ 50–100%). Despite being a drug candidate with a high ER, it had favorable physicochemical properties that resulted in good oral bioavailability in several preclinical species and potent in vivo activity in a mouse xenograft model. The regional differences between the colon and the small intestine regarding the expression of efflux transporters and the tight junction may potentially also affect the rate and extent of colon absorption as well as the prediction performance in this investigation. However, it has previously been concluded that there is no indication that efflux-mediated transport limits colon absorption, which suggests that it is likely the intrinsic passive permeability that is the major determinant of the membrane transport in the colon. This is further supported by recently established correlations between in vitro permeability and human colon absorption, where the in vitro assays mainly measure the passive drug transport. Furthermore, as the main source for the estimated permeability in this investigation was the Caco-2 model, which is of colonic origin, it is likely that the well-known effect of narrower tight junctions in the colon was appropriately accounted for in the predictions.

Conclusions

Regional human intestinal permeability was identified as one important factor for future intestinal permeability determinations in both in vitro and in vivo models.
Human regional intestinal permeability is especially important for the validation of existing and improved bioengineered in vitro intestinal transport models. Determinations of in vivo colon permeability are of special urgency but are very difficult in humans. Novel GI capsule systems, GI devices with external control, and capsules connected to long GI-tube methodologies are useful in those projects. In vitro intestinal P app values in the Ussing and 2D cell monolayer models need scaling and adjustment prior to use in PBBM. The choice of permeability model is important for the assessment of the effect of pharmaceutical excipients. Caco-2 cell monolayers have been shown to often overpredict the potential in vivo effects of pharmaceutical excipients, and this higher sensitivity is explained by the multiple differences between the simple Caco-2 monolayer and the human in vivo intestine with its additional features, such as its mucus layer and full neuroendocrine feedback systems. Future intestinal organoids and 3D bioengineered intestinal models might exhibit morphological and physiological features that resemble those of native intestinal mucosa. These more complex in vitro systems are promising but require extensive evaluation and validation prior to use in rational drug discovery and development and for regulatory decision-making. Encequidar and elacridar may be very useful tools to assess the effect of intestinal efflux mediated by P-gp and/or BCRP on the rate and extent of intestinal absorption. Biopharmaceutics has an exciting future with the development of novel GI devices for assessment in humans and animals, bioengineered in vitro systems mimicking in vivo conditions, advanced modeling with molecular dynamics simulation and artificial neural networks (ANN) in drug discovery, and extended use of more accurate PBBMs in all parts of drug development.
Model and knowledge development to predict the effective permeability of new and challenging drug candidates beyond Lipinski’s rule of 5, with a molar mass above 700 and Log D > 5, will be an important part of any future successful drug development. These novel ANN simulation tools for oral drugs may also be applied before synthesis and may even allow for optimization of relevant physicochemical properties of new molecules of interest.

Discussion

The main objective of this part of the session was to discuss best practices for the integration of permeability in PBBM.

3.5.2.1 Q1: What Are the Available Methods to Estimate Jejunal P eff and What Is the Rank Order between the Methods with Regard to Confidence in the P eff Estimation?

The majority of the attendees stated that they use MDCK or Caco-2 cell systems to estimate jejunal P eff . PAMPA may be used at early stages of drug development according to the session participants. An in-house calibration curve is normally used for the in vitro to in vivo permeability extrapolation. A few participants used built-in calibration curves from commercially available software, such as GastroPlus or Simcyp. It was stated that, when a calibration curve is used, it should cover low, moderate, and high permeability compounds. To reduce interstudy or interlaboratory variability, a calibrator, or a compound with known in vivo permeability, is often utilized. On rare occasions, QSAR models have been used directly to estimate P eff . Finally, the participants shared that oral solution PK data can be used to optimize P eff . It was anecdotally agreed that the experimentally obtained measurements of P eff from in vitro assays are a measure of passive permeability. When there is a need for characterizing protein-mediated transport, transfected cell lines may be used.
While the impact of protein-mediated efflux may be limited for high passive permeability compounds, it is important to characterize the impact of efflux transporters for low passive permeability compounds, taking into account the variability of experimentally obtained V max or K m values. For lipophilic compounds, or to address food effects, biorelevant media may be used. The value of in situ permeability in a rat model was discussed in terms of challenges in extrapolation and experimental variability. Most regulators shared that Caco-2 data are most commonly reported in regulatory applications. Canadian and European regulatory agencies indicated that well-controlled in situ data may be accepted. Differences in how passive P eff and transporter kinetics are integrated into various software need to be considered. There was an agreement that the Caco-2 cell model performs well for high permeability compounds. It is important, though, to cross-check across a variety of data sets and P eff measurements collected using different methodologies.

3.5.2.2 Q2: Confidence in P eff Estimation – Low vs High Permeability Compounds?

Most participants agreed that there is a high degree of confidence in the estimated P eff for high permeability compounds, while the confidence in the estimated P eff for low to moderate permeability compounds was lower. Although no conclusions were made during the discussion regarding a cutoff value between high and low P eff , a P eff of 1.34 × 10⁻⁴ cm/s, corresponding to the measured human jejunal P eff of metoprolol and a fraction absorbed in humans of 90%, has been used previously for this purpose. Similarly, minoxidil, with an observed human fraction absorbed of 85%, can be applied as a divider between high and low permeability.
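The metoprolol cutoff can be related to fraction absorbed through the classic single-parameter mass-balance approximation fa ≈ 1 − exp(−2·P eff·⟨t res⟩/R). In the sketch below the small-intestinal residence time and radius are assumed textbook values, not values from the article, so the estimate lands near, rather than exactly at, the 90% quoted above.

```python
import math

METOPROLOL_PEFF = 1.34e-4  # cm/s, the high/low permeability divider cited above

def fraction_absorbed(peff_cm_s: float, t_res_s: float = 3 * 3600,
                      radius_cm: float = 1.75) -> float:
    """fa ~= 1 - exp(-2 * Peff * t_res / R); t_res and R are assumed values."""
    return 1.0 - math.exp(-2.0 * peff_cm_s * t_res_s / radius_cm)

def is_high_permeability(peff_cm_s: float) -> bool:
    """Classify against the metoprolol-based Peff cutoff."""
    return peff_cm_s >= METOPROLOL_PEFF

# ~81% with these assumed parameters, close to but not exactly the quoted 90%.
print(f"fa at the cutoff Peff: {fraction_absorbed(METOPROLOL_PEFF):.0%}")
print(is_high_permeability(2.0e-4), is_high_permeability(5.0e-5))
```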
The group also acknowledged that the extensive interlaboratory variability in the measured in vitro permeability is a factor playing a role in the credibility of the final estimates of the human P eff , especially for low permeability compounds. Therefore, a reference data set for high and low permeability marker compounds established within each lab is beneficial.

3.5.2.3 Q3: How Do We Use in Vitro Permeability Data Generated in Biorelevant Media as Input?

Biorelevant media such as FaSSIF and FeSSIF may improve the solubility of some compounds in the apical chamber, but micelle entrapment/binding may bias estimation of apparent permeability ( P app ) across monolayers. For example, the Caco-2 P app of lipophilic compounds like danazol is inversely proportional to the concentration of bile salt in the donor chamber, whereas the P app of more hydrophilic compounds was insensitive to the bile salt concentration. Careful consideration should be exercised when using P app data obtained in biorelevant media as input, since such data may represent a mixture of micelle entrapment and permeability. Measuring the free concentration in the donor chamber of the Transwell system or modeling drug-micelle binding and P app simultaneously may be helpful, but further studies are needed to assess the benefits of either approach. Finally, when biorelevant media are used, the pH of the mucus layer in vivo needs to be taken into consideration. Mucus pH approximates the upper gut pH. Therefore, considering the mucus layer pH and the composition of the lipids in the mucus in vivo versus in vitro may be key to more reliable estimations of P eff .

3.5.2.4 Q4: P app – P eff Correlation vs Fitting P eff to Observed Data – When to Do What?

Several methodologies have emerged throughout the years to calculate gut permeability (effective permeability, P eff ) for orally administered drug products.
Some of these methodologies, such as the Caco-2 in vitro system, were initially developed to select candidates or inform go/no-go decisions based on their permeability characteristics, or to assess the need for in vivo testing. It was agreed that novel technologies such as PBBM, together with experimental data, have been leveraged to generate in vivo predictions of permeability in virtual populations. Accumulating knowledge in the field indicates that for high permeability compounds the Caco-2 in vitro approach appears to be of high confidence. In the absence of data collected in a Caco-2 in vitro system, a mathematical model (such as PBBM) may leverage appropriate clinical PK data sets, e.g., for a nonprecipitating oral solution, to derive (estimate) a P eff value. The challenge with this approach is the type of observed data utilized for predicting (“fitting”) this parameter, which may include individual or mean PK profile data from an oral solution or from any other dosage form for which drug release from the dosage form, and not permeation through the gut epithelium, is the rate-limiting step. The use of individual-level PK data may inflate the intersubject variability incorporated into an in silico model, while the use of an oral dosage form other than an oral solution may lead to a parameter identifiability issue. As such, leveraging in vitro permeability data collected in a Caco-2 system toward an initial “bottom-up” approach for P eff is advisable. Confirming the calculated P eff using informative clinical PK data is necessary. In the case where Caco-2 data do not result in satisfactory predictions, it may be acceptable to perform parameter optimization on P eff within the developed PBBM against the available clinical PK data. Gut metabolism, particularly relevant for high extraction drugs, was identified during the discussion as a complicating factor for P eff characterization in the PBBM.
To handle model identifiability, for PBBM development purposes, applying an in vitro-in vivo extrapolation to inform a “bottom-up” approach in which gut metabolism is mechanistically predicted was suggested. Knowledge on the relative contribution of the gut metabolism toward the overall metabolism (clearance) was identified as critical toward accurately capturing the gut extraction ratio in a PBBM. It is expected that this recommended workflow will perform better for highly permeable compounds compared to low permeability compounds, for which additional challenges may need to be addressed.

3.5.2.5 Q5: When Can Permeability Input into PBBM Be Based on Passive Permeability Alone, and When Is There a Need to Account for Uptake/Efflux Transporter Mediated Transport?

Inclusion of transporter effects into an in silico model should be data driven. The decision should be based on the experimental results. Nonlinearity in clinical studies could be due to a transporter effect. Further exploration of the extent of the impact may be warranted. A well-controlled modeling and simulation approach may be accepted by regulatory agencies to investigate the impact of a transporter. A clinical DDI study for transporter inhibition may eventually become warranted.

3.5.2.6 Q6: What Is the Best Practice to Account for Uptake/Efflux Transporter Mediated Transport?

When a transporter effect on the clinical outcome for an orally administered drug is suspected, the extent of the transporter involvement on oral absorption and specifically gut permeability should be thoroughly and systematically investigated. Studies using in vitro and animal models have sometimes been used to determine the need for further in vivo studies in humans.
The activity of the transporter protein can be characterized across a dose range of the victim drug and in the presence of well-established transporter activity modifiers, within the context of in vitro or in vivo studies exploring potential drug–drug interactions and their clinical impact. These types of studies provide reliable estimates for parameters describing the saturable component of the absorption process governed by transporter proteins (Michaelis–Menten kinetics). These parameters include but are not limited to K i (inhibition constant), K I (inhibitor concentration causing half-maximal inactivation), k inact (maximal inactivation rate constant), K m (Michaelis–Menten constant), J max (maximal flux rate), and V max (maximal rate). Depending on the implementation of the saturable absorption process in a mechanistic PBBM, these parameters may serve as model inputs. With the application of in vitro-in vivo extrapolations that are validated for their intended purpose and embedded into PBBMs, population predictions in virtual healthy subjects or patients may be generated. The session participants acknowledged the challenge associated with determining appropriate model inputs for the V max parameter, most probably because in vitro V max values are typically highly dependent on the in vitro system utilized for data collection. Additional considerations regarding the regional expression of transporter proteins across the GI tract and the relative expression of these proteins are expected to inform key decisions on the development and validation of PBBMs that incorporate gut transporters. Guidelines and relevant literature are abundantly available for efflux transporters such as P-glycoprotein (P-gp) and BCRP. These transporter proteins have been documented to limit bioavailability for orally administered drug substances by pumping them back into the gut lumen after they enter the enterocytes.
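The saturable (Michaelis–Menten) component discussed above can be sketched as follows; the V max and K m values here are arbitrary illustrative numbers, not measured inputs. The J/C term shows why a transporter’s contribution to apparent permeability, and hence a measured efflux ratio, shrinks as the donor concentration rises toward saturation.

```python
def mm_flux(conc: float, vmax: float = 100.0, km: float = 5.0) -> float:
    """Saturable transporter flux, J = Vmax * C / (Km + C) (Michaelis-Menten)."""
    return vmax * conc / (km + conc)

def carrier_permeability(conc: float, vmax: float = 100.0, km: float = 5.0) -> float:
    """Carrier contribution to apparent permeability, J / C = Vmax / (Km + C)."""
    return vmax / (km + conc)

# Flux saturates toward Vmax while the per-concentration contribution falls off,
# consistent with efflux ratios that decrease at saturating concentrations.
for c in (1.0, 10.0, 30.0, 100.0):
    print(f"C = {c:6.1f} uM: J = {mm_flux(c):6.2f}, J/C = {carrier_permeability(c):5.2f}")
```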
However, there is a significant knowledge gap regarding uptake gut transporters and their relative contribution to oral absorption, which renders their incorporation into mechanistic in silico models challenging.

3.5.2.7 Q7: What Is the Confidence in Using the Estimated Jejunal P eff to Define the P eff in the Other Compartments?

Based on available experimental data, there is low confidence in using the estimated jejunal P eff to define P eff in the other intestinal compartments. The relative values used for P eff in the jejunum versus the colon may be extremely important when modeling extended-release (ER) and modified-release (MR) products. For low permeability compounds, jejunal P eff is considered to be higher than P eff in the colon. This reflects the current general understanding within the community. Commercially available software currently utilizes the same value for P eff in both the jejunum and colon. This value is corrected for the effective surface area corresponding to the different gut segments. In the absence of observed data, the group agreed that the correction is necessary but may be an overly simplistic approach. The attendees agreed that it is challenging to understand how the effective surface area in the gut/different regions is estimated and acknowledged that potential “pockets” in the gut are not considered.

3.5.2.8 Q8: How Can Colon P eff Be Estimated?

Experimentally, a colon P eff can be obtained with local administration of the compounds of interest using either intubation or telemetric capsule techniques. Indirectly, when utilizing a modeling approach, the group shared that they would vary the P eff value used as the model input to match the observed data. This is essentially a model-fitting approach.
Conclusions and Future Directions

This workshop represented the culmination of a year-long collaborative effort between industry and regulatory authorities and was overall successful in its effort to advance PBBM for oral products. The morning session focused on the regulatory agency discussion of PBBM case studies submitted by industry members of the IQ consortium and provided insights into the regulatory assessment process and some clarity regarding what is looked for in PBBM regulatory submissions. The afternoon sessions discussed best practices, decision trees, and checklists, which will be useful for future submissions of PBBM regarding the measurement of key input parameters such as drug solubility, drug product dissolution, precipitation, and permeability. In addition, breakout sessions also discussed best practices around how these measurements should be undertaken for various drug and formulation types and how the measured values should be modeled mechanistically and integrated in the PBBMs.
There are remaining gaps in our knowledge and limitations in directly translating in vitro data to predict in vivo drug substance or drug product performance; the breakout session discussions also covered these gaps and proposed, where relevant and feasible, practical approaches to address them based on preclinical or clinical measured data. Overall, sound model parametrization and explanation thereof are key to the success of PBBMs and their acceptance by regulatory agencies, and the requirements for measurement and integration of these parameters should be shared across the scientific community. The decision trees, checklists, and subject matter expert advice presented in this paper and the Supporting Information can be understood as practical tools to foster scientific discussion and to continue efforts toward harmonization of best practices for PBBM.
Say Yes! COVID Test: A Health Communication Campaign to Encourage Use of Rapid, At-Home Antigen Testing in Underserved and Historically Marginalized Communities | a6440eb2-c75a-4375-ad14-20e0047fc54f | 9903010 | Health Communication[mh] | Black and Latino Americans are disproportionately affected by COVID-19, with nearly 3 times the risk of hospitalization and at least twice the risk of death compared with Whites, and inequalities in health communication during this public health emergency may reinforce existing disparities. Lessons learned from campaign planning and implementation can inform future public health initiatives, including selecting the appropriate marketing mix to facilitate awareness, and collaborating with community partners and local health departments to ensure successful program execution. Overall, we observed that no one marketing tool was the most effective in increasing awareness and test kit orders/pickup, and that different channels helped reach different subpopulations; we also found that demand for test kits outlasted the SYCT campaign duration, suggesting that health departments, community organizations, and policymakers should look for ways to provide free test kits outside of a particular campaign window. Throughout the COVID-19 pandemic, health organizations and governments around the world have relied in part on health communicators to promote preventive behaviors to reduce the spread of severe acute respiratory syndrome—coronavirus 2 (SARS-CoV-2). Surveillance testing of populations is a countermeasure that has been used by schools, workplaces, and athletic teams to aid in diagnosing asymptomatic cases of COVID-19, which are known to be infectious and may account for nearly half of all COVID-19 cases. Rapid identification of SARS-CoV-2 infection can lead to faster index case identification and isolation and can help prevent community transmission. 
With this goal, surveillance testing has been piloted on a wider scale across entire communities. Black and Latino Americans are disproportionately affected by COVID-19, with nearly 3 times the risk of hospitalization and at least twice the risk of death compared with Whites. Inequalities in health communication during this public health emergency may reinforce existing disparities. We report on the development, execution, and evaluation of a health communication campaign supporting the first community-wide at-home SARS-CoV-2 testing program in the United States.

Intervention and Aims

Say Yes! COVID Test (SYCT) is a public health initiative supported by local health departments and government, community, and academic groups. The goal of SYCT was to determine whether frequent at-home rapid antigen testing for SARS-CoV-2 infection could decrease community spread of the virus by triggering early isolation and other precautions for infected individuals, including those who were asymptomatic. The health communication campaign aimed to achieve the following objectives:

- Build awareness: Increase awareness of free at-home test kits in the selected communities, with a focus on underserved and historically marginalized populations.
- Place test kits in hands: Facilitate online test kit orders or local pickup by community members.
- Inspire short-term behavior change: Encourage test kit use at regular intervals (3 times weekly) for 4 weeks, even if no symptoms of COVID-19 are present.
- Promote health and safety: Educate participants about safety precautions in the event of a positive or negative test result and empower them to take appropriate next steps for the health of themselves, their family, and the community.
- Present additional research opportunity: Inform participants about the opportunity to participate in an optional research study evaluating health behavior in a way that does not detract from the primary public health initiative.
Population

To align with deployment of the public health initiative, our campaign's target audience was adult residents of selected communities in North Carolina (Greenville, Pitt County) and Tennessee (Chattanooga, Hamilton County), with a focus on underserved and historically marginalized populations. Details of the community selection process are described in the literature. The communication campaign focused on the city of Chattanooga rather than the entire population of Hamilton County. The goal was to distribute test kits to 40 000 households per community (each participating household was given enough tests for up to 2 household members), or an estimated 25% of the population. The highest priority audiences were those with a greater COVID-19 exposure risk, such as unvaccinated community members, essential workers, and those with many points of contact outside the home, along with their family members. In addition to reaching individuals in the community, the campaign focused on engaging community leaders and organizations to spread campaign messages within their communities.

Key Concepts

To plan and implement the health communication campaign, the SYCT communications team applied principles of social marketing, which include commercial marketing principles and techniques designed to improve the health and welfare of community members. The team focused on the 4 key elements central to a successful social marketing strategy: product, price, place, and promotion (Supplemental Table 1).

Time Period

Due to the urgency of the pandemic, the campaign was developed under an accelerated timeline, with launch occurring 4 weeks after selection of the participating communities. The campaign supported 6 weeks of advertising focused on test distribution and an additional 2 to 4 weeks of advertising focused on test use reminders in each community. While the at-home tests were authorized for use in those ≥8 years of age, the campaign was designed to reach adults.
The campaign ran from March 24 to June 4, 2021, in Pitt County and May 3 to July 2, 2021, in Chattanooga.

Branding and Messaging

The SYCT communications team partnered with communication strategists, program leaders, writers, graphic designers, and a creative agency to develop a campaign name and logo. The name "Say Yes! COVID Test" was selected because of its positive sentiment, clear call to action, direct link to the campaign objective, and acceptable translation into Spanish, the language of a high-priority target audience. To engage residents of the selected communities, imagery included local landmarks and landscapes, along with people reflective of the target audience with diversity in age, gender, race, and ethnicity. Program collateral was co-branded with the local public health department name and logo. During the campaign, messaging evolved from a call to "join the at-home testing challenge" to personal stories of "why I test" from local community leaders, to a focus on the free limited-time offer, and on to educational messages designed to dispel misinformation and build trust.

Campaign Websites

A central campaign website (sayyescovidtest.org) was created to provide a holistic view of the program and serve as a portal to the community-specific websites, which were designed with the primary objective of getting test kits into the hands of local community members. Test kits could be ordered online or picked up from community partner distribution sites (Supplemental Figure 1). The proportion of test kits to be distributed locally versus via online ordering was not predetermined and remained flexible. The websites also provided testing recommendations and shared information about the opportunity to participate in an optional SYCT research study to evaluate health behavior. Spanish versions of the content were available by clicking a language selection button on the website.
Public Relations and Earned Media

The CDC and NIH initiated the campaign launch with a co-led press release announcing the SYCT program in the 2 communities. The Pitt County Health Department issued a press release announcing the availability of free, at-home, rapid COVID-19 test kits and hosted a press conference on launch day, which was timed to coincide with their weekly COVID-19 press briefing. The Hamilton County Health Department announced their participation shortly after, using the same strategy. Local health department directors served as the primary spokespeople for the campaign and were supported by community leaders who spoke about the importance of testing and shared their testing stories. Press releases were issued throughout the campaign to announce distribution milestones and events as well as to close the program and thank community partners.

Digital Advertising

Geotargeting was used for digital advertising to restrict ads by ZIP code. Google search ads were utilized; keywords of interest included "COVID testing near me," "rapid home COVID test," and "home COVID test kit." Digital ads promoting SYCT were run on Facebook, Instagram, YouTube, streaming television, and streaming radio. These ads included 3 promotional video concepts (15-30 seconds long), motion animations, and images.

Social Media

Before campaign launch, the SYCT communications team identified local influencers, businesses, and groups with the largest number of social media followers in their geographical area. The team contacted these influencers and groups to inform them of the mission of SYCT and request their support in posting and sharing campaign content. A social media toolkit created for the campaign provided sample posts and images for sharing (Supplemental File 1). English and Spanish versions were circulated for influencers and local community partners.
The campaign was supported by a social media presence on Facebook, Instagram, and Twitter, including local Facebook pages for both communities. NextDoor was added as a channel after program launch in response to NIH and CDC recommendations based on their success with the platform in other public health initiatives. We activated Snapchat ads late in the campaign in an attempt to engage the 18 to 24 age group. Facebook was the priority channel for organic social media. Local, tailored content was posted weekly to promote SYCT messages and test kit distribution events.

Out-of-Home Advertising

SYCT advertisements were featured on local billboards, buses, bus shelters, and windows of local businesses. We provided brightly colored outdoor canopy tents and feather flag signs to draw attention to distribution sites. The campaign also used ads on gas station televisions, convenience store checkout digital displays, and screens in healthcare facilities. Furthermore, we employed paid outreach teams to hang SYCT door-hanger ads on residential households. Residents also received a direct mailer with program information.

Paid Media: Television, Radio, and Newspaper

Geotargeting for broadcast television and radio was done via designated market areas. For newspaper and radio advertisements, we established local media partnerships, with a particular focus on Black- and Hispanic-owned media. We ran SYCT ads in both print and online local publications, along with radio ads that were a mix of recorded audio and live reads by local hosts.

Metrics Evaluation

We monitored website metrics (via Google Analytics), performance of digital advertisements, and online test kit orders weekly. Conversion rates, measured digitally, indicate the percentage of visitors or viewers who took the desired action of clicking to order a test kit. In addition, a market research study was conducted to evaluate awareness of the SYCT initiative and usage of the tests in Pitt and Hamilton Counties.
Relevant questions included: "Are you aware that Pitt/Hamilton County Public Health is providing at-home test kits to households for free?"; "How did you hear about the at-home test kit program?"; and "How often have you used the at-home tests?"

We executed an advertising campaign with a diverse mix of marketing channels. We spent $528 446 across both communities, which resulted in over 25 million estimated impressions.
A total of 26 582 free test kits were distributed in Pitt County and 39 453 in Hamilton County, equaling a combined 1.6 million tests across both communities.

Campaign Websites

Most website traffic came from the target metro areas (55% for Pitt County, 66% for Chattanooga). Conversions (clicks to order a test) were also primarily from the target communities (68% for Pitt County, 75% for Chattanooga). The proportion of sessions from mobile devices was similar in both communities (74% for Pitt County, 72% for Chattanooga) and was even higher for paid traffic (83% for Pitt County, 82% for Chattanooga). English was the browser language for 99% of users in Pitt County and 98% in Chattanooga. The Spanish websites resulted in 7 conversions for Pitt County and 15 for Chattanooga. Paid traffic was the largest acquisition source, followed by direct and social. Organic search and direct traffic had the highest conversion rates for Pitt County (53% and 51%, respectively), while email and referral traffic had the highest conversion rates for Chattanooga (44% and 40%, respectively).

Public Relations and Earned Media

The media strategy generated awareness of and interest in SYCT on a national, state, and local level and led to an influx of test kit orders at launch and throughout the campaign. The program was covered in 18 print/digital articles in Pitt County, 2 in Chattanooga, and a total of 51 including national coverage. It was mentioned in 26 local radio/broadcast segments in Pitt County, 19 in Chattanooga, and a total of 86 including national coverage.

Digital Advertising

Digital advertising made up the largest component of the campaign. Across digital channels, an average of 7.9% of clicks in Pitt County and 11% in Chattanooga resulted in a test kit order from the program's website.
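As a back-of-envelope check on the figures in this section, the reported kit counts and the 3-times-weekly, 4-week testing regimen for up to 2 household members imply roughly 24 tests per kit. The sketch below illustrates that arithmetic plus the conversion-rate metric; the per-kit test count and the raw click/order counts are our own illustrative assumptions, not values reported by the program.

```python
# Sanity checks on the distribution totals and conversion metrics above.
# ASSUMPTION: each kit held 3 tests/week x 4 weeks x 2 household members
# = 24 tests; the article reports only kit counts and the 1.6M total.
TESTS_PER_KIT = 3 * 4 * 2  # 24

kits_distributed = {"Pitt County": 26_582, "Hamilton County": 39_453}
total_tests = sum(kits_distributed.values()) * TESTS_PER_KIT
print(f"Estimated tests distributed: ~{total_tests / 1e6:.1f} million")

def conversion_rate(orders: int, clicks: int) -> float:
    """Share of ad clicks that ended in a test kit order, as a percentage."""
    return 100.0 * orders / clicks if clicks else 0.0

# Hypothetical counts chosen to reproduce the reported 7.9% Pitt County average.
print(f"Conversion rate: {conversion_rate(790, 10_000):.1f}%")
```

Under the assumed 24-tests-per-kit figure, the kit totals land at about 1.6 million tests, consistent with the combined total reported above.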
Google search ads had the highest conversion rates of the digital ads in both Pitt County (32%) and Chattanooga (31%), which was expected due to the user intent of seeking COVID-19 testing. However, the majority of test kits (52% in Pitt County and 63% in Hamilton County) were distributed locally by community partners (eg, religious organizations, local businesses) and not via online ordering. The promotional videos had an average completion rate of 87%.

Social Media

Of all web traffic resulting from digital advertising, Facebook/Instagram ads were responsible for 80% in Pitt County and 65% in Chattanooga. Of digital ad conversions, Facebook/Instagram ads were responsible for 82% in Pitt County and 46% in Chattanooga. Engagement with Facebook ads was highest in the ≥65-year-old age group and lowest in the 18 to 24 and 25 to 34 age groups. We observed increases in website traffic coinciding with boosted Facebook posts. "Why I Test" personal stories performed best in Pitt County, whereas informational boosted posts such as "comfortable and safe" and "you're in control" received more clicks in Chattanooga. Video ads on Facebook had stronger click-through and engagement rates than still images. Of the still images, the 2-3-0 graphic performed well. On Facebook, there were 30 505 total post clicks and 1651 total reactions, comments, and shares across communities. Tweets resulted in 194 300 impressions and 557 link clicks. On Instagram, program accounts gained 31 followers. Across both communities, about half of the 50 local influencers we contacted participated in the campaign. NextDoor performed well in Chattanooga (18% engagement) but not in Pitt County (1.4% engagement). Snapchat ads were unsuccessful (0.5% engagement).

Out-of-Home Advertising

Out-of-home advertising was the second largest campaign spend. We purchased mailing lists and sent promotional postcards to 81 923 households in Pitt County and 100 765 in Chattanooga.
Paid outreach teams hung door-hanger advertisements at 49 000 residences in Pitt County and 99 000 in Chattanooga. In total across both communities, we ran advertisements on 5 billboards, 16 bus shelters, the sides of 13 buses, windows of ~110 retailers, televisions in pumps at 66 gas stations, 27 convenience store checkout digital displays, and 37 medical office waiting room screens. These tactics resulted in combined estimated impressions of more than 1.3 million in Pitt County and 2.2 million in Chattanooga.

The health communication campaign was effective in raising awareness and facilitating test kit orders and local pickup. According to market research, 79.8% of respondents in Pitt County and 74.8% in Hamilton County were aware of the availability of home COVID-19 tests. A little more than half of respondents in both counties were aware that they could receive free test kits through SYCT. In both counties, awareness was highest among Black respondents. While awareness was high, market research also indicated that our campaign may have reached a saturation threshold in that some residents were aware of the program but did not want a free test kit. The stage of the pandemic also likely influenced test kit demand. For example, the more transmissible COVID-19 Delta variant was circulating during the SYCT Chattanooga initiative, which had greater test kit distribution within the same campaign duration.

The marketing plan for SYCT was developed to work in tandem with the project's community engagement plan. In brief, it was a multipronged strategy that engaged local health departments and community organizations. This coordinated approach allowed us to make real-time modifications to community engagement/outreach and marketing based on uptake of the test kits and other project variables (eg, research study enrollment).
Local health departments have a long history of partnering with community groups to promote the health of people in their areas. The local health departments involved in SYCT introduced the program to highly engaged and connected community members, which was critical to the success of the program. Community partner organizations distributed most of the test kits, illustrating the value of employing local channels that were most familiar to residents. Additionally, partner organizations' leaders and constituents contributed significantly to the campaign by acting as spokespeople and sharing their personal stories, which were profiled by local and national media.

In terms of driving online test kit orders, the campaign was more successful in Chattanooga than in Pitt County. This may be due in part to the stage of the pandemic. In addition, the accelerated timeline for launch and resulting decreased planning time led to a slower campaign rollout in Pitt County, while Chattanooga benefited from the initial lessons learned along with additional lead time to reserve ad space and make connections with community partners. Chattanooga was also a larger market with more advertising opportunities. Launching all advertising channels from day 1 in Chattanooga coincided with a large initial spike in test kit orders. At the health department's suggestion, a telephone number was included on marketing materials in Chattanooga, which helped drive orders. The health department reported fielding 90 to 100 calls per day for test kit orders during the height of the campaign.

Overall, we observed that no one marketing tool was the most effective and that different channels helped reach different subpopulations. For example, while Facebook drove the most traffic of all digital advertising, it skewed female and older.
Although the public health initiative did not collect demographic data from residents who ordered test kits, the market research studies showed higher reported awareness of and participation in the program by minorities, which was a goal of the campaign. We do not have data to indicate whether there were differences in participation based on socioeconomic status or access to the internet, which were noted as barriers in the Liverpool testing program. However, we sought to minimize disparities in distribution by employing a wide variety of marketing channels, having in-person pickup and telephone ordering as options, and mounting a robust community engagement effort. Young, White males were difficult to reach. To increase participation among this group, future campaigns might explore an organic presence on Reddit or TikTok.

Executing a campaign about COVID-19 testing was challenging in several ways. COVID-19 is a polarizing and political topic where misinformation is rampant. Campaign social media posts attracted trolls and energized debate that required close monitoring and careful moderation. In addition, advertising policies kept evolving during the pandemic. On some channels, including Instagram and Facebook, SYCT ads were automatically taken down on numerous occasions, and we had to submit appeals to get them reinstated. Some channels, such as TikTok, were not allowing any COVID-19-related advertising. A few local publications also turned away campaign ads.

Being a government-supported initiative brought some criticism and distrust, as indicated by social media comments. This challenge was also noted with surveillance testing programs in the United Kingdom. In a 2020 survey on vaccine hesitancy, approximately 66% of Black Americans and 43% of Latino community members indicated that the government can rarely or never be trusted to look after their interests.
Black Americans were also twice as likely to trust a messenger of their own racial/ethnic group compared with a White counterpart. In campaign branding and messaging, we focused on delivery of tests by the local health departments as opposed to the national government sponsors. We also built partnerships with local minority community leaders, such as ministers and town council members, who volunteered to serve as communications allies and help share the campaign with their communities. Future efforts would benefit by gathering feedback from community stakeholders earlier and throughout the campaign. Orders from the Spanish SYCT websites were low. Future campaigns could do more to address the Hispanic/Latino population specifically. The compressed timeline prevented us from conducting formative research to inform message and campaign development. However, data from other testing research suggested that appealing to an individual’s desire to protect their family and community is motivating, along with offering peace of mind. In addition, tailoring messages by harnessing a connection to an individual’s identity can enhance effectiveness in specific subpopulations. Our campaign was not successful in getting test kit recipients to test regularly several times a week. Meta-analyses have found that the effectiveness of mass media health campaigns on behavior change varies by target behavior, and other moderators of campaign effectiveness remain unclear or inconsistent. Further study is needed to explore barriers to frequent COVID-19 self-testing. Evaluation using an approach such as theory of change may help provide greater insight into causal connections and contextual factors influencing this outcome.
A health communication campaign to encourage use of rapid, at-home antigen testing in underserved and historically marginalized communities, combined with robust community engagement, was successful in building awareness and getting test kits into the hands of community members. More research is needed to understand test kit use patterns and how to support frequent at-home testing. Overall, we observed that no one marketing tool was the most effective in increasing awareness and test kit orders/pickup, and that different channels helped reach different subpopulations. Marketing efforts should be scaled up or down based on the stage of the pandemic, anticipated demand for test kits, and available advertising dollars. Similar programs with limited budgets can apply these findings to help select the most cost-effective communications channels. Furthermore, future campaigns should consider integrating complementary protective health behaviors, such as masking, hand washing, physical distancing, and vaccination. We found that events offering both vaccination and test kits were effective in providing multiple protective measures at the same time, with the same manpower. We also found that demand for test kits outlasted the SYCT campaign duration, suggesting that health departments, community organizations, and policymakers should look for ways to provide free test kits outside of a particular campaign window. Lessons learned from the marketing of this initiative can be applied to other public health programs that seek to engage underserved communities.

Supplemental material (sj-docx-1, sj-docx-2, and sj-docx-3, inq-10.1177_00469580221146046) for Say Yes! COVID Test: A Health Communication Campaign to Encourage Use of Rapid, At-Home Antigen Testing in Underserved and Historically Marginalized Communities by Lindsay Singler, Gina Uhlenbrauck, Giselle Corbie-Smith, Al Richmond, Amy Hattem, Kristen Linney and Michael Cohen-Wolkowiez in INQUIRY: The Journal of Health Care Organization, Provision, and Financing.
Externally validated and clinically useful machine learning algorithms to support patient-related decision-making in oncology: a scoping review | 5397092c-fe1e-4efe-a9c6-d428eb1b58b3 | 11843972 | Internal Medicine[mh] | Finding a cure for cancer, in its many forms, is still a tremendously complex problem. Despite continuous advances in understanding its biological foundations and the emergence of new treatment possibilities, this disease is still the world's second-leading cause of death , causing an enormous socioeconomic burden and an immense workload for physicians . As part of the procedures to diagnose and treat patients with cancer, practitioners collect massive amounts of data, including clinical notes, previous conditions, diagnoses, treatments, prescriptions, laboratory test results, radiological images, and phenotypic and genotypic features. Along with any prior patient-specific knowledge in the same or other healthcare contexts, this information is increasingly stored in virtual collections – electronic health records (EHRs) . Notwithstanding the potential of this digitization, the resulting exponential, ever-increasing data expansion – both in volume and complexity – has inevitably shortened the time for clinicians to learn, follow emerging clinical guidelines, and gather all relevant information for proper care . Indeed, with a single patient estimated to generate up to 8 Gb of raw input ranging from unstructured clinical narratives to scanned documents , automated techniques have undeniably become required to distill insight from EHRs and assist in decision-making. In that vein, machine learning (ML) – a branch of Artificial Intelligence (AI) with the ability to learn from and identify patterns in the available data – is increasingly used in healthcare to model patient-specific predictive, prognostic, or prescriptive assessments at the point of decision-making . 
In this context, ML models can be deployed as standalone applications or fall into several technologies, such as clinical decision support (CDS) and computer-aided detection (CADe) or diagnosis (CADx) systems . The main difference between these tools concerns the type of data used for model development: while CADx and CADe approaches rely on imaging, CDS systems usually involve text-based information, such as test results, comorbidities, patient history, and other relevant clinical variables . Machine learning can be divided into two subtypes, supervised and unsupervised learning, separated by the use of labeled or unlabeled datasets . On the one hand, supervised learning models – e.g., support vector machines (SVMs), gradient boosting (GB), random forests (RF), and logistic regression – correlate previously organized features (such as unique patient characteristics) with known outcomes . This approach deals with two types of problems: (i) classification, to produce discrete outputs (or classes), for example, to predict tumor malignancy ; and (ii) regression, to estimate continuous values . In healthcare, regression algorithms can be used, for instance, to determine the risk of developing lesions or sequelae over time ; or to establish an adequate dose of medication to administer to a specific patient . On the other hand, unsupervised learning methods focus on finding natural patterns in unlabeled data . These models – including principal component analysis (PCA), k-means, gaussian mixture models, density-based spatial clustering of applications with noise (DBSCAN), and balanced iterative reducing and clustering (BIRCH) – are used to find relationships between variables, assign them to different groups according to their similarities (clustering), and to prioritize and reduce the number of features in the dataset (dimensionality reduction) . 
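To make the supervised/unsupervised distinction above concrete, the following is a minimal, self-contained numpy sketch on invented toy data (not clinical data): a supervised nearest-centroid classifier learned from labeled samples, contrasted with an unsupervised k-means loop that recovers the same group structure without labels. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two groups of "patients" described by two features
# (e.g., a lab value and a lesion size), 20 samples per group.
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
malignant = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))
X = np.vstack([benign, malignant])
y = np.array([0] * 20 + [1] * 20)      # labels: 0 = benign, 1 = malignant

# --- Supervised classification (labels available) ---------------------
# Nearest-centroid classifier: learn one centroid per class from the
# labeled data, then assign new samples to the closest centroid.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(samples):
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

train_accuracy = (classify(X) == y).mean()

# --- Unsupervised clustering (labels withheld) -------------------------
# Plain k-means (k=2): assign points to the nearest of two seeds, then
# move each seed to the mean of its assigned points, and repeat.
seeds = X[[0, -1]].copy()
for _ in range(10):
    assign = np.linalg.norm(X[:, None, :] - seeds[None, :, :], axis=2).argmin(axis=1)
    seeds = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
```

On well-separated toy groups like these, the unlabeled k-means run recovers essentially the same partition that the supervised classifier learns from labels, which is the core contrast drawn in the text.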
A specific set of methods, artificial neural networks (ANNs), has even been the basis for a subcategory of machine learning termed deep learning (DL) . Designed to (partially) emulate human neuronal processing, ANNs are composed of artificial neurons (or nodes), interconnected and stacked into three types of layers : (i) input layer , containing the original dataset variables; (ii) hidden layer(s) , where the data is processed at a certain level of representation ; and (iii) output layer , with the attained results. In contrast to standard ANNs, usually limited to one hidden layer and still requiring labeled features , deep neural networks (DNNs) can derive knowledge from two or more increasing levels of abstraction, with their depth growing along with the number of hidden layers . DNNs (e.g., convolutional and recurrent neural networks) can accurately detect and classify patterns in complex labeled or unlabeled datasets , having produced ground-breaking results in numerous areas, including image, pattern, and language recognition . Over the years, several ML- and DL-based tools have been developed to support clinical decision-making in oncology, with many reported benefits. First, these methods can accurately predict cancer susceptibility, recurrence, survival, and risk of complications according to multiple constraints and therapeutic paths . Second, these can be employed in gene expression analysis to predict mutations, proving useful in targeted gene therapy . Third, artificially intelligent approaches in imaging analysis are usually used for tumor monitoring, detection (CADe), segmentation, diagnosis (CADx), and staging . In particular, ML can be paired with radiomics, a quantitative imaging approach that deconstructs medical images into mineable features. 
ML-based radiomic pipelines, most commonly applied in oncology, are usually composed of four sequential stages: (i) image retrieval and segmentation, to delineate regions or volumes of interest (for two- or three-dimensional images, respectively); (ii) high-dimensional quantitative feature extraction, to unravel tumor pathophysiology into measurable biomarkers, such as size, volume, texture, shape, and intensity; (iii) feature reduction, to explore relationships between variables to remove redundant or correlated features; and (iv) prognostic/predictive modeling, to link specific features with possible outcomes. By mapping the whole tumor and its adjacent tissues, this technique allows performing dynamic virtual biopsies, which can be used to capture spatial and temporal intra-tumoral heterogeneity, a key factor linked with tumor aggressiveness and poor treatment responses and survival. These results can be integrated with other available sources of clinical, pathological, and genomic data and leveraged for individualized decision-making to, for example, determine chemo- or radiotherapy doses, treatment-resistant regions, or the best sites to perform an actual biopsy. Finally, efforts have recently been made to develop digital twins (DTs) for cancer patients. In a medical context, DTs can be described as dynamic virtual replicas modeled intelligently after each physical patient's medical, behavioral, and environmental variables, used for real-time simulations. Here, DTs provide clinical support by non-invasively anticipating treatment responses, predicting drug effectiveness, monitoring health indicators, and detecting abnormalities, thus easing decision-making and avoiding unnecessary costs and ineffective procedures. Because of these unique capabilities, ML- and DL-based approaches have unleashed the potential to revolutionize standard healthcare, especially when made available to practitioners at the point of care.
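The four-stage radiomic pipeline outlined above can be sketched numerically on a synthetic image. This is purely illustrative: the "scan", the lesion, the cohort, and the risk threshold in step (iv) are all invented, and the threshold stands in for a trained ML model.

```python
import numpy as np

rng = np.random.default_rng(1)

# (i) Image retrieval and segmentation: a synthetic 2-D "scan" with one
# bright circular lesion; the boolean mask stands in for the delineated ROI.
img = rng.normal(100.0, 5.0, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
img[mask] += 60.0                               # lesion is hyperintense

# (ii) Quantitative feature extraction inside the ROI.
roi = img[mask]
features = {
    "area_px": float(mask.sum()),               # size
    "mean_intensity": float(roi.mean()),        # first-order intensity
    "intensity_sd": float(roi.std()),           # crude heterogeneity proxy
    "max_intensity": float(roi.max()),
}

# (iii) Feature reduction: drop any feature highly correlated (|r| >= 0.9)
# with an already-kept one, across a toy "cohort" built by perturbing the
# lesion's feature vector.
names = list(features)
cohort = np.array(list(features.values())) + rng.normal(0.0, 1.0, (30, len(names)))
corr = np.corrcoef(cohort, rowvar=False)
keep = []
for i in range(len(names)):
    if all(abs(corr[i, j]) < 0.9 for j in keep):
        keep.append(i)
reduced = [names[i] for i in keep]

# (iv) Prognostic/predictive step: the reduced features would normally feed
# an ML model; a single-threshold rule stands in for it here.
high_risk = features["mean_intensity"] > 130.0
```

Real pipelines use dedicated toolkits and hundreds of standardized features, but the data flow (segment, extract, reduce, predict) follows this same shape.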
Nonetheless, the overwhelming majority of algorithms developed for cancer-related decisions have yet to reach oncology practice , mainly due to subpar methodological reporting and validation standards . Showing performance on the patients used for development (internal validation) is insufficient, particularly for small sample sizes ; as predictions are modeled after that specific cohort, results can be misleading (e.g., biased or overfitted) and non-generalizable to new case mixes . Thus, before a new or updated artificially intelligent method can be adopted in clinical practice, it must undergo a thorough evaluation process, which usually consists of external (ideally, clinical) validation and the assessment of clinical utility. First, to ensure model reproducibility (or external validity) and increase confidence in its predictions or estimates, its performance should be evaluated in separate, independent, and comprehensive patient datasets representing the intended target setting(s) . Specifically, the following metrics should be reported : (i) calibration, i.e., the ratio between predicted and observed outcomes, ideally revealed graphically in a calibration plot (to depict the whole range of predictions); and (ii) discrimination, that is, the ability to separate individuals with or without the event of interest. For regression models (continuous outcomes), discrimination is usually shown via concordance (C) index or mean absolute or squared error . For classification tasks (discrete/binary outcomes), discrimination metrics can include the area under the ROC curve (AUC), accuracy, sensitivity, specificity, precision (or positive predictive value), or F-score (i.e., Dice similarity) . Second, this information should always be complemented by evaluating clinical utility, i.e., quantifying the impact of the developed tool on decision-making – and, subsequently, on patient outcomes – through comparative analyses . 
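For a binary classifier, the discrimination metrics listed above, plus a crude one-number calibration summary, can be computed as in this illustrative sketch; the predicted probabilities and outcomes are invented, and, as the text notes, a full calibration plot across the range of predictions is preferred over a single ratio.

```python
import numpy as np

# Toy external-validation set: predicted event probabilities and observed
# binary outcomes (1 = event) for ten patients.
p = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2, 0.1])
y = np.array([1,   1,   1,   0,   1,    0,   0,    1,   0,   0])

pred = (p >= 0.5).astype(int)                   # classify at a 0.5 threshold

tp = int(((pred == 1) & (y == 1)).sum())
tn = int(((pred == 0) & (y == 0)).sum())
fp = int(((pred == 1) & (y == 0)).sum())
fn = int(((pred == 0) & (y == 1)).sum())

accuracy = (tp + tn) / len(y)
sensitivity = tp / (tp + fn)                    # true-positive rate (recall)
specificity = tn / (tn + fp)                    # true-negative rate
precision = tp / (tp + fp)                      # positive predictive value
f_score = 2 * precision * sensitivity / (precision + sensitivity)

# AUC as the probability that a random event case is ranked above a random
# non-event case (the Mann-Whitney U formulation of the ROC area).
pos, neg = p[y == 1], p[y == 0]
auc = ((pos[:, None] > neg[None, :]).sum()
       + 0.5 * (pos[:, None] == neg[None, :]).sum()) / (len(pos) * len(neg))

# Crude calibration summary: ratio of mean predicted to mean observed risk.
calibration_in_the_large = p.mean() / y.mean()
```

A ratio near 1 suggests predictions are well calibrated on average, while the discrimination metrics quantify how well the model separates events from non-events.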
These comparisons include, for example, clinicians performing the same task with or without assistance, patient outcomes before and after implementation, or, although less informative, direct comparison between well-established models developed for the same end . If possible, this evaluation should preferably be carried out in randomized clinical trials (to minimize confounding variables) or, at least, prospective observational studies so that impact may be assessed over time . Lastly, to ensure clinical validity, this process should involve real-world data (RWD), that is, information routinely collected from actual patients (for example, from EHRs and wearable or mobile devices) . In the context of this scoping review, we conducted a preliminary search of the PubMed database regarding externally validated ML models developed for patient-related decision-making in oncology. Firstly, although several decision support-focused publications do continue to emerge, they are either: (i) focused on context-specific applications, such as evaluation for a specific type of tumor or field ; (ii) not approached from a machine learning perspective, that is, not stating which algorithms were used, their performance or their clinical validity ; or (iii) outdated . Secondly, only two reports concerning external validation in oncology were found, focused on shortcomings, lack of reporting standards, and risk of bias . However, none of these reviews report: (i) which were the externally validated algorithms; (ii) how the validation studies were designed and their target populations; (iii) if performance was compared against expert clinicians or gold standards and using real-world data; and (iv) if any links can be made between specific algorithms and cancer variants. Since processing mechanisms for different cancer types can be contrasting, addressing the last issue is particularly relevant.
Stellar examples are melanomas and other tumors requiring imaging analysis, which have proven to be accurately identified with neural networks (see , for example). Accordingly, further connections could and should be made between different types of cancer and frameworks with specific ML techniques. For the reasons abovementioned, we conducted a scoping review to systematically map externally validated and clinically useful ML-based models developed for patient-related decision-making in the broad scope of oncology practice. Namely, we aimed to report on their validation and impact on decision-making (clinical utility), attempt to associate specific models with particular types of cancer and decisions to make by quantifying their performance, and unveil research gaps in this field. We hope that our findings can be translated into efficient implementations. The ultimate goal is to simplify the decision process and reduce misprescriptions, thus lowering clinicians' workload, increasing confidence, and avoiding misuse and malapplied AI, potentially leading to better healthcare. The remainder of this paper is organized as follows. Section " " details the methodological approach for the scoping review, including the databases and search terms used and the inclusion and exclusion criteria for data analyses. Section " " provides a critical qualitative and quantitative synthesis of external validation and clinical utility assessment. Finally, Section " " concerns the discussion, where feasible associations between particular types of cancer and ML algorithms are established, the limitations currently faced by ML are outlined, research gaps are presented, conclusions are drawn, and guidance is provided for further work. This scoping review was conducted according to the updated Joanna Briggs Institute (JBI) methodology for scoping reviews , which we also used to develop our protocol (see Additional file ). 
In addition, it followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis extension for Scoping Reviews (PRISMA-ScR) checklist, adapted to encompass the PRISMA 2020 statement (the filled PRISMA2020 Checklist is provided in Additional file ). Furthermore, as recommended by JBI , the Population/Concept/Context (PCC) mnemonic guided the identification of the main concepts, research questions, and search strategy in this review. Here, the population consists of cancer patients (with no restrictions). The concept is externally validated and clinically useful machine or deep learning algorithms to assist decision-making regarding clinical outcomes for cancer patients. The context is oncology care in any setting. Our methodology (and PCC elements) is described in detail in our protocol and summarily presented below. Research questions The research questions and sub-questions were outlined as follows: What externally validated machine learning algorithms have been developed to assist patient-related decision-making in oncology practice? ◦ For what types of cancer variants and clinical outcomes were these models developed? ◦ How were the validation studies designed? ◦ Which populations and types of inputs were used? ◦ Have these methods been tested on real-world data? ◦ Have the models been implemented in clinical practice? ◦ How was performance assessed during external validation? How was clinical utility established for these methods? ◦ Which comparators and metrics have been used? Which machine learning algorithms show the best performance depending on the type of cancer, clinical modalities, and the decision(s) to be made? ◦ What are the reported effects of these ML-based models on decision-making and outcomes? What are the research gaps in this field? 
Types of sources and search strategy This scoping review considered quantitative experimental, quasi-experimental, and observational study designs, including randomized and non-randomized controlled trials, before and after studies, prospective and retrospective cohort studies, and any additional relevant quantitative and comparative research frameworks. Conference abstracts, qualitative studies, and secondary research designs (such as reviews, editorials, letters, and book chapters) were not considered due to not typically reporting individual (if any) performance metrics, thus impeding quantitative analyses. Grey literature was also not included. To limit the scope of this review and increase reproducibility, it only encompassed peer-reviewed journal articles with institutional or open full-text access. Furthermore, to ensure quality and reliable reporting, papers were only assessed for eligibility if published in journals whose Scimago Journal and Country Rank (SJR, 2021), an indicator of scientific journal prestige, is higher than one and whose best quartile is Q1 . The search strategy aimed to locate primary research papers published in peer-reviewed journals. As suggested by JBI, a 3-step search strategy was executed. First, the first author undertook a limited search of PubMed to identify articles on the topic. As a result of this search, keywords were divided into three categories: machine-learning-based decision-making ( " machine learning " OR " deep learning " OR " classification " OR " regression " OR " clinical decision support " OR " computer-aided diagnosis " OR " computer-aided detection " OR " digital twin(s) " OR " decision-making " ), cancer ( " cancer " OR " oncology " OR " tumor(s) " OR " neoplasm(s) " OR " malignancy " ), and evaluation ( " comparison " OR " performance " OR " valid* " ). 
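As a small illustration of how the three keyword groups just listed combine into a single boolean search string (terms OR-ed within a group, groups AND-ed together), here is a generic sketch; the database-specific adaptations of this query are given in the Additional file, so the exact syntax below is only the generic shape.

```python
# The three keyword groups from the search strategy, verbatim.
groups = {
    "ml_decision_making": [
        "machine learning", "deep learning", "classification", "regression",
        "clinical decision support", "computer-aided diagnosis",
        "computer-aided detection", "digital twin(s)", "decision-making",
    ],
    "cancer": ["cancer", "oncology", "tumor(s)", "neoplasm(s)", "malignancy"],
    "evaluation": ["comparison", "performance", "valid*"],
}

def build_query(keyword_groups):
    # Quote each term, OR the terms within a group, then AND the groups.
    ored = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
            for terms in keyword_groups.values()]
    return " AND ".join(ored)

query = build_query(groups)
```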
This search strategy and the inclusion criteria were deliberately designed without imposing limitations on ML, patient profiles, or specific cancer-related settings, ensuring the inclusion of a wide range of relevant papers and maximizing the comprehensiveness of the review. Second, these keywords were used to develop a complete search strategy for the EMBASE, IEEE Xplore, PubMed, Scopus, and Web of Science databases. The search terms were adapted to each database (see Additional file ). This study selected IEEE Xplore to address computing articles, PubMed and EMBASE to include biomedical literature, and Scopus and Web of Science to cover multidisciplinary reports. Only publications written in English were considered for inclusion. Studies published from January 1, 2014 were searched, as this year aligns with when deep learning became mainstream . Third, the reference list of all included sources of evidence was screened for additional studies. Eligibility criteria This review included new or updated externally validated machine or deep learning algorithms to assist decision-making regarding clinical outcomes for cancer patients, with no restrictions regarding cancer types or specific demographics. Samples could consist of human patients or lesions (for image analysis), provided that the focus was on cancer patient outcomes and data routinely available in clinical settings were used. All commonly known machine learning algorithms and digital twin approaches were considered, as these align with clinical prediction models. Although not universally qualified as an external assessment , papers reporting model performance on temporally different datasets (temporal validation) were also included. The assessment of clinical utility was mandatory, but all clinical comparators were included (e.g., comparison against standard care, before-after studies, and clinician performance with and without the tool, among many others). 
Studies were discarded if they: Were not primary research articles published in peer-reviewed journals whose SJR was equal to or higher than one. This criterion was established to ensure the inclusion of research from sources recognized for their quality and impact, thereby enhancing the reliability and relevance of the synthesized evidence. Used synthetic patients or animals. This restriction was imposed to prioritize real-world applicability in clinical settings, where outcomes and decisions are based on authentic human patient data. Although an instrumental resource, synthetic data may not fully encapsulate the complexity and variability inherent in clinical practice . Concerned sequencing, omics, and molecular biomarker discovery. These studies were excluded due to the specialized and currently less accessible nature of omic information in routine clinical settings, a challenge particularly pronounced for proteomics and metabolomics . This review centers on algorithms ready for immediate use in clinical decision-making, aligning with the immediate needs of healthcare practices. Used non-machine learning approaches (for traditional statistical algorithms such as logistic regression and naïve Bayes, these were excluded unless explicitly described as machine learning models); Developed algorithms for anything other than patient care (such as medical education, structured data collection, text classification, cohort-specific assessments, or EHR dashboards); Were not primarily focused on oncology; Did not present performance metrics for external validation (either in the current or previous papers). These metrics are required to verify the algorithms' reliability and generalizability beyond the development environment, a key indicator of their readiness for clinical application. Had not assessed clinical utility. 
This assessment is critical for demonstrating an algorithm's palpable benefit in improving patient care, an essential aspect of its value to the medical community. Were not written in English. This requirement ensures wide accessibility and comprehension of the review's findings within the global scientific community. Did not have full-text access (inaccessible or inexistent), as this limitation prevents an in-depth analysis of the studies’ methodologies and outcomes. Study selection Following the search, all identified citations were collated in RIS format, uploaded into EndNote 20.4.1 /2022 (Clarivate Analytics, PA, USA), and deduplicated (first electronically, followed by a manual sweep). A Python script was then used to filter publications by SJR ranking (available at Additional file ). The remaining citations were imported into a spreadsheet, and titles and abstracts were screened for assessment against the inclusion criteria for the review. Next, a full-text inspection of the potentially relevant sources was carried out. Disagreements at each stage of the selection process were resolved through discussion among the authors. The search and study inclusion process results are presented in a Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for scoping review (PRISMA-ScR) flow diagram updated per the PRISMA 2020 statement (see Fig. in Results ). Data charting Data were extracted using a data extraction form (available in our protocol – see Additional file ). These data were stored in Excel spreadsheets and included general information and specific details about the participants, concept, context, study methods, and critical findings relevant to the review questions. No modifications were made to the original form. General study characteristics included the first author, title, year of publication, journal, SJR ranking, and whether limitations were reported and any reporting guidelines were followed. 
The following information was charted from each source: development design (development and validation or validation only), study design (retrospective versus prospective), care type (primary, secondary, tertiary, or quaternary), general and specific cancer type, the study's focus (e.g., survival or diagnosis), best-performing machine learning method(s), task (classification, regression, or both), type of implementation, interface, system classification (e.g., CADx, CDSS), processing time, software, number of institutions in validation, data availability, validation type, data source (i.e., the country from which the data were obtained), population details (age group, number of patients, number of female and male patients, sample type, and sample size), whether independent validation was performed and real-world data were used, which discrimination and calibration metrics were used to evaluate validation performance, and which comparators and metrics were used to assess the models’ clinical utility. The data is presented in tabular and graphical form, accompanied by a narrative summary. All statistical analyses and graphic illustrations were performed using Pandas 1.3.4 and Matplotlib 3.4.3 (Python 3.9.7). Critical appraisal and risk of bias Besides discarding publications whose SJR was lower than one, no other evaluations concerning data quality were carried out, which aligns with the JBI's protocol for scoping reviews . The research questions and sub-questions were outlined as follows: What externally validated machine learning algorithms have been developed to assist patient-related decision-making in oncology practice? ◦ For what types of cancer variants and clinical outcomes were these models developed? ◦ How were the validation studies designed? ◦ Which populations and types of inputs were used? ◦ Have these methods been tested on real-world data? ◦ Have the models been implemented in clinical practice? 
◦ How was performance assessed during external validation? How was clinical utility established for these methods? ◦ Which comparators and metrics have been used? Which machine learning algorithms show the best performance depending on the type of cancer, clinical modalities, and the decision(s) to be made? ◦ What are the reported effects of these ML-based models on decision-making and outcomes? What are the research gaps in this field? This scoping review considered quantitative experimental, quasi-experimental, and observational study designs, including randomized and non-randomized controlled trials, before and after studies, prospective and retrospective cohort studies, and any additional relevant quantitative and comparative research frameworks. Conference abstracts, qualitative studies, and secondary research designs (such as reviews, editorials, letters, and book chapters) were not considered due to not typically reporting individual (if any) performance metrics, thus impeding quantitative analyses. Grey literature was also not included. To limit the scope of this review and increase reproducibility, it only encompassed peer-reviewed journal articles with institutional or open full-text access. Furthermore, to ensure quality and reliable reporting, papers were only assessed for eligibility if published in journals whose Scimago Journal and Country Rank (SJR, 2021), an indicator of scientific journal prestige, is higher than one and whose best quartile is Q1 . The search strategy aimed to locate primary research papers published in peer-reviewed journals. As suggested by JBI, a 3-step search strategy was executed. First, the first author undertook a limited search of PubMed to identify articles on the topic. 
As a result of this search, keywords were divided into three categories: machine-learning-based decision-making ( " machine learning " OR " deep learning " OR " classification " OR " regression " OR " clinical decision support " OR " computer-aided diagnosis " OR " computer-aided detection " OR " digital twin(s) " OR " decision-making " ), cancer ( " cancer " OR " oncology " OR " tumor(s) " OR " neoplasm(s) " OR " malignancy " ), and evaluation ( " comparison " OR " performance " OR " valid* " ). This search strategy and the inclusion criteria were deliberately designed without imposing limitations on ML, patient profiles, or specific cancer-related settings, ensuring the inclusion of a wide range of relevant papers and maximizing the comprehensiveness of the review. Second, these keywords were used to develop a complete search strategy for the EMBASE, IEEE Xplore, PubMed, Scopus, and Web of Science databases. The search terms were adapted to each database (see Additional file ). This study selected IEEE Xplore to address computing articles, PubMed and EMBASE to include biomedical literature, and Scopus and Web of Science to cover multidisciplinary reports. Only publications written in English were considered for inclusion. Studies published from January 1, 2014 were searched, as this year aligns with when deep learning became mainstream . Third, the reference list of all included sources of evidence was screened for additional studies. This review included new or updated externally validated machine or deep learning algorithms to assist decision-making regarding clinical outcomes for cancer patients, with no restrictions regarding cancer types or specific demographics. Samples could consist of human patients or lesions (for image analysis), provided that the focus was on cancer patient outcomes and data routinely available in clinical settings were used. 
All commonly known machine learning algorithms and digital twin approaches were considered, as these align with clinical prediction models. Although not universally qualified as an external assessment, papers reporting model performance on temporally different datasets (temporal validation) were also included. The assessment of clinical utility was mandatory, but all clinical comparators were included (e.g., comparison against standard care, before-after studies, and clinician performance with and without the tool, among many others). Studies were discarded if they:

◦ Were not primary research articles published in peer-reviewed journals whose SJR was equal to or higher than one. This criterion was established to ensure the inclusion of research from sources recognized for their quality and impact, thereby enhancing the reliability and relevance of the synthesized evidence.
◦ Used synthetic patients or animals. This restriction was imposed to prioritize real-world applicability in clinical settings, where outcomes and decisions are based on authentic human patient data. Although an instrumental resource, synthetic data may not fully encapsulate the complexity and variability inherent in clinical practice.
◦ Concerned sequencing, omics, and molecular biomarker discovery. These studies were excluded due to the specialized and currently less accessible nature of omic information in routine clinical settings, a challenge particularly pronounced for proteomics and metabolomics. This review centers on algorithms ready for immediate use in clinical decision-making, aligning with the immediate needs of healthcare practices.
◦ Used non-machine learning approaches (traditional statistical algorithms such as logistic regression and naïve Bayes were excluded unless explicitly described as machine learning models).
◦ Developed algorithms for anything other than patient care (such as medical education, structured data collection, text classification, cohort-specific assessments, or EHR dashboards).
◦ Were not primarily focused on oncology.
◦ Did not present performance metrics for external validation (either in the current or previous papers). These metrics are required to verify the algorithms' reliability and generalizability beyond the development environment, a key indicator of their readiness for clinical application.
◦ Had not assessed clinical utility. This assessment is critical for demonstrating an algorithm's palpable benefit in improving patient care, an essential aspect of its value to the medical community.
◦ Were not written in English. This requirement ensures wide accessibility and comprehension of the review's findings within the global scientific community.
◦ Did not have full-text access (inaccessible or inexistent), as this limitation prevents an in-depth analysis of the studies' methodologies and outcomes.

Following the search, all identified citations were collated in RIS format, uploaded into EndNote 20.4.1/2022 (Clarivate Analytics, PA, USA), and deduplicated (first electronically, followed by a manual sweep). A Python script was then used to filter publications by SJR ranking (available at Additional file ). The remaining citations were imported into a spreadsheet, and titles and abstracts were screened for assessment against the inclusion criteria for the review. Next, a full-text inspection of the potentially relevant sources was carried out. Disagreements at each stage of the selection process were resolved through discussion among the authors.
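The SJR-filtering step mentioned above can be reduced to a journal lookup against the eligibility threshold. The authors' actual script is in the Additional file; the sketch below only illustrates the logic, with hypothetical field names:

```python
# Minimal sketch of SJR-based filtering (field names are hypothetical;
# the authors' real script is provided in their Additional file).
def keep_by_sjr(records, sjr_lookup, threshold=1.0):
    """Keep records from journals with SJR >= threshold and best quartile Q1."""
    kept = []
    for rec in records:
        journal = rec.get("journal", "").strip().lower()
        info = sjr_lookup.get(journal)
        if info and info["sjr"] >= threshold and info["quartile"] == "Q1":
            kept.append(rec)
    return kept

# Toy lookup table and citation list (SJR value for Frontiers in Oncology
# taken from the review's own reporting; the second journal is invented).
sjr = {"frontiers in oncology": {"sjr": 1.291, "quartile": "Q1"},
       "some local journal": {"sjr": 0.4, "quartile": "Q3"}}
records = [{"title": "A", "journal": "Frontiers in Oncology"},
           {"title": "B", "journal": "Some Local Journal"}]
print([r["title"] for r in keep_by_sjr(records, sjr)])  # -> ['A']
```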
The search and study inclusion process results are presented in a Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for scoping review (PRISMA-ScR) flow diagram updated per the PRISMA 2020 statement (see Fig. in Results ). Data were extracted using a data extraction form (available in our protocol – see Additional file ). These data were stored in Excel spreadsheets and included general information and specific details about the participants, concept, context, study methods, and critical findings relevant to the review questions. No modifications were made to the original form. General study characteristics included the first author, title, year of publication, journal, SJR ranking, and whether limitations were reported and any reporting guidelines were followed. The following information was charted from each source: development design (development and validation or validation only), study design (retrospective versus prospective), care type (primary, secondary, tertiary, or quaternary), general and specific cancer type, the study's focus (e.g., survival or diagnosis), best-performing machine learning method(s), task (classification, regression, or both), type of implementation, interface, system classification (e.g., CADx, CDSS), processing time, software, number of institutions in validation, data availability, validation type, data source (i.e., the country from which the data were obtained), population details (age group, number of patients, number of female and male patients, sample type, and sample size), whether independent validation was performed and real-world data were used, which discrimination and calibration metrics were used to evaluate validation performance, and which comparators and metrics were used to assess the models’ clinical utility. The data is presented in tabular and graphical form, accompanied by a narrative summary. 
All statistical analyses and graphic illustrations were performed using Pandas 1.3.4 and Matplotlib 3.4.3 (Python 3.9.7). Besides discarding publications whose SJR was lower than one, no other evaluations concerning data quality were carried out, which aligns with the JBI's protocol for scoping reviews.

Study selection

A total of 13 708 records were identified in our search, which was last updated on September 30, 2022. As shown in Fig. , after duplicate removal and filtering by SJR ranking, the titles and abstracts of 4023 citations from Embase, IEEE Xplore, PubMed, Scopus, and Web of Science were assessed. In this stage, 3325 papers were excluded for not being machine learning-based ( n = 1204, 29.9%), using genetic variables or omics ( n = 705, 17.5%), not being externally validated (clearly mentioning performance evaluation by cross-validation or hold-out sampling, n = 587, 14.6%), not being focused on oncology ( n = 534, 13.3%), not regarding patient care or clinical decision-making (e.g., creation of data infrastructures or organizing EHRs, n = 166, 4.1%), not being primary research articles ( n = 101, 2.5%), and not including human patients ( n = 28, 0.7%). This left 698 papers eligible for full-text inspection, of which 62 were excluded for unavailability. From the remaining 636 reports, 274 (43.1%) were discarded for not assessing or quantifying clinical utility, 252 (39.6%) for not being externally validated, 17 (2.7%) for not directly concerning patient care, 13 (2%) for not reporting performance metrics, 13 (2%) for focusing on gene expression or omics, 4 (0.6%) for not containing machine learning models, 2 (0.3%) for not focusing on oncology, and 1 (0.2%) for being a secondary research paper.
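The title/abstract screening arithmetic above can be reconciled in a few lines:

```python
# Cross-check of the title/abstract screening counts reported above.
excluded = {"not ML-based": 1204, "genetic variables/omics": 705,
            "not externally validated": 587, "not oncology": 534,
            "not patient care": 166, "not primary research": 101,
            "no human patients": 28}
assert sum(excluded.values()) == 3325      # total excluded at this stage

eligible = 4023 - sum(excluded.values())
assert eligible == 698                     # papers sent to full-text inspection

assessed = eligible - 62                   # 62 full texts unavailable
assert assessed == 636                     # reports assessed in full
print(assessed)
```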
For example, although Yang et al.'s study seemed relevant (it described external validation and compared diagnostic competence against pathologists), it reported only intraclass correlation coefficients and did not quantify clinicians' performance, which led to its exclusion. No additional relevant documents were found by screening the included studies. Finally, 56 articles were included in this scoping review. The completed form for the included studies can be found in Additional file .

Study overview

Table summarizes key findings from the 56 studies on patient-centered ML applications in oncology, providing an overview of algorithms, clinical applications, data types, and evaluation methods for clinical utility. The following subsections offer insights into different aspects of the data.

Journals, years of publication and reporting guidelines

As depicted in Fig. A, the included articles were retrieved from 31 journals with an average SJR (2021) of 2.496, from a minimum of 1.005 (Scientific Reports) to a maximum of 7.689 (Gastroenterology). Frontiers in Oncology was the most common source ( n = 9, 16.07%, SJR = 1.291), followed by eBioMedicine ( n = 6, 10.71%, SJR = 2.9) and European Radiology ( n = 5, 8.93%, SJR = 1.73). Eight (25.8%) of these journals were primarily dedicated to methodological issues and computational methods within artificial intelligence (dashed bars in Fig. A), while the remaining twenty-three (74.2%) focused on medical applications and patient-related topics. Concerning the year of publication, although citations since 2014 were screened, only papers from 2018 onwards met the inclusion criteria. The number of reports increased substantially after 2020, with 23% ( n = 13), 27% ( n = 15), and 43% ( n = 24) of the sources being from 2020, 2021, and 2022, respectively, versus 2% ( n = 1) in 2018 and 5% ( n = 3) in 2019 (Fig. B).
While the majority did not adhere to any reporting guidelines ( n = 48, 85.714%), 3 (5.357%) used TRIPOD, 3 (5.357%) followed STARD 2015 (commonly used for diagnostic and prognostic studies), and 2 used CONSORT-AI and STROBE (1 each, 1.786%). Lastly, caveats were not reported for a small percentage of studies (7.14%, n = 4).

Algorithms, cancer types and clinical outcomes

The features of the machine learning algorithms found in the included articles are detailed in Table . Sixty-two models were described in the 56 documents, with 55.4% (31/56) of the authors explicitly mentioning which algorithms were used in the paper's abstract. Most developers opted for an ensemble approach ( n = 27, 48.2%), 26 (46.4%) for single models, and three (5.4%) for both. Of the selected studies, 50 (89.3%) were exclusively devoted to classification, 4 to regression (7.1%), and 2 developed both types of models (3.6%). All models were supervised except in one study (semi-supervised), and 50% of the researchers ( n = 28) compared their systems against other ML algorithms. Apart from work developed in , where the model was silently integrated into the patients' EHRs, all models were deployed as standalone systems. Overall, 30 (53.6%) can be classified as CADx, 19 (33.9%) as CDSS, 2 (3.6%) as CADe, and 5 as both CADe and CADx (8.9%). Regarding interfaces, most tools were desktop-based ( n = 46, 82.1%), and 10 (17.9%) were deployed as web-based applications. The URLs of all web-based tools were reported, 43 articles (76.79%) disclosed which software was used, and code was provided for 11 models (19.6%). Most studies were deep-learning-based ( n = 36, 64.3%). From these, the most frequently reported models were Convolutional Neural Networks (CNNs), either alone (29/36, 80.55%), coupled with a Recurrent Neural Network (RNN, 3/36, 8.34%), or with Logistic Regression (LR), a shallow ANN, Gradient Boosting (GB), a Support Vector Machine (SVM), and Random Forest (RF, 1/36, 2.78%).
Specific CNN architectures were reported for approximately 76% of the articles (25/33), which, as shown in Fig. , primarily consisted of ResNet- ( n = 9, 36%) and DenseNet-based frameworks ( n = 8, 32%), used individually or in conjunction. To overcome data scarcity, transfer learning was used in 16 of the 33 CNN-based articles (48.5%), which involves pre-training the network on a specific problem and transferring that base knowledge to a new, related task (see Table : pre-trained in column General Focus and Models). Besides CNNs, other DL algorithms were described in four articles. Multilayer Perceptrons (MLPs) were used in three (5.56%), two of which applied a DeepSurv architecture, a deep Cox proportional hazards feed-forward neural network. The last (2.78%) involved a neural multitask logistic regression model (N-MTLR). The remaining documents ( n = 20, 35.7%) described a non-deep-learning-based workflow encompassing fifteen unique algorithms applied in twenty-eight configurations. From these, boosting-based techniques were the most widely reported, consisting of eXtreme Gradient Boosting (XGBoost, 6/28, 21.43%), a Light Gradient Boosting Machine (LightGBM, 1/28, 3.57%), LogitBoost (1/28, 3.57%), Adaptive Boosting (AdaBoost, 1/28, 3.57%), and Gradient-Boosted Decision Trees (GBDT, 2/28, 7.14%). Other decision tree designs were also used, including RF (6/28, 21.43%) and extremely randomized trees (ExtraTrees, 1/28, 3.57%). The third most reported group of algorithms were SVMs, a Support Vector Classifier (SVC), and a Quadratic SVM (4/28, 14.28%), followed by shallow ANNs (2/28, 7.14%) and LR (1/28, 3.57%). Lastly, Mixture Discriminant Analysis (MDA), k-nearest Neighbors (kNNs), and naïve Bayes (NB) were also found, all used in the same article (total of 3/28, 10.71%).
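The transfer-learning pattern described above for the CNN studies amounts to freezing a pre-trained feature extractor and fitting only a new task head. A framework-agnostic numpy sketch of that freezing idea (a toy stand-in, not any study's actual ResNet/DenseNet pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: its weights are frozen and never updated,
# mirroring how CNN backbones are reused on a new, related task.
W_backbone = rng.normal(size=(4, 8))

def features(x):
    return np.tanh(x @ W_backbone)     # frozen feature extractor

# New task head, trained from scratch on the target problem.
w_head = np.zeros(8)

X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)        # toy binary target

for _ in range(300):                   # logistic-regression training, head only
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    grad = features(X).T @ (p - y) / len(y)
    w_head -= 0.5 * grad               # only the head moves; W_backbone is frozen

acc = ((features(X) @ w_head > 0).astype(float) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Freezing the backbone is what lets the reviewed studies train on small clinical cohorts: only the low-dimensional head is estimated from the scarce target data.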
Regarding general cancer types, the selected papers can be broadly divided into two categories: those concentrating on primary tumors and those mainly examining metastasized (secondary) cancers. Most articles focused on primary tumors (51/56, 91.1%), although four also included metastases . These cancers can be further branched into the specific system where the malignancy was formed: (i) central nervous system (CNS), including the brain (3/51, 5.88%) ; (ii) digestive system, encompassing colorectal (7/51, 13.73%) , esophageal (3/51, 5.88%), gastric (5/51, 9.8%) , and liver cancers (2/51, 3.92%) ; (iii) endocrine system, involving cancers of the pancreas (2/51, 3.92%) and thymus (1/51, 1.96%) ; (iv) genitourinary system, consisting of bladder (1/51, 1.96%) , cervical (1/51, 1.96%) , prostate (2/51, 3.92%) , and endometrial (2/51, 3.92%) cancers; (v) integumentary system, with tumors of the breast (4/51, 7.84%) and skin (2/51, 3.92%); (vi) respiratory system, studying neoplasms of the larynx (1/51, 1.96%) , lung (10/51, 19.61%) , mesothelium (1/51, 1.96%) , and nasopharynx (1/51, 1.96%) ; and (vii) the skeletal system, comprising the bones (4/51, 7.84%) . In addition, five papers analyzed metastatic cancers (5/56, 8.9%), which can also be bifurcated into malignancies spread to nodes or organs. The former includes solid metastatic breast, lung, and gastrointestinal and genitourinary tract tumors , bone metastases in kidney cancer patients , and liver metastases from colorectal cancers . The latter encompasses thyroid cancer spread to lymph nodes and sentinel lymph node metastasis from primary breast lesions . Seventy-six cancer-related goals were addressed in the 56 documents, with an average of one task performed per paper and a maximum of three . 
These included the development or improvement of systems for: (i) diagnosis alone ( n = 28, 50%) or combined with detection ( n = 5, 8.93%) or prognosis ( n = 1, 1.79%); (ii) detection by itself ( n = 2, 3.58%) or coupled with outcome prediction ( n = 1, 1.79%); and (iii) outcome prediction, including prognosis ( n = 16, 28.58%) and risk stratification ( n = 3, 5.36%). Finally, fifteen studies resorted to explainable AI (XAI) to increase the transparency behind the models' decisions. Unlike black-box methods, whose reasoning is indecipherable, XAI allows the creation of interpretable models to determine how each prediction was reached and which clinical predictors bore the most weight. Three packages were used for this purpose: (i) SHapley Additive exPlanations (SHAP), which can be employed with any ML algorithm ( n = 6, 40%); and (ii) Class Activation Mapping (CAM, n = 1, 6.67%) and Gradient-weighted CAM (Grad-CAM, n = 8, 53.33%), explicitly developed for CNNs.

Clinical inputs and populations

According to the clinical variables used as input, the models validated in the 56 studies can be divided into three types: image-based (including video, n = 37, 66.1%), text-based ( n = 10, 17.9%), and mixed, using both clinical modalities ( n = 9, 16.1%).

Image-based Studies

A total of 335 085 high-resolution images from 112 538 patients (102 117 female, 8 215 male) were used for classification in 36 of the 37 image-based studies and for classification (recurrence) and regression (recurrence-free survival) in the last study. Except for one paper including both pediatric and adult patients (unknown age proportion, 175 female, 116 male) and two other articles not listing the patients' age group (698 in , unknown in , unidentified male–female ratio in both), all studies consisted of adults (111 469 patients, 101 942 women, 8 099 men). Eight studies (21.6%) extracted radiomic features from the retrieved images.
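First-order radiomic features of the kind extracted in those studies are intensity statistics computed over a region of interest. A minimal numpy sketch (a hypothetical toy ROI, not a full radiomics pipeline with shape or texture matrices):

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features from the ROI's intensity values."""
    v = np.asarray(roi, float).ravel()
    hist, _ = np.histogram(v, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins for log2
    return {
        "mean": float(v.mean()),
        "variance": float(v.var()),
        "skewness": float(((v - v.mean()) ** 3).mean() / (v.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()), # histogram (Shannon) entropy
        "energy": float((v ** 2).sum()),
    }

rng = np.random.default_rng(1)
roi = rng.normal(loc=100, scale=15, size=(32, 32))  # toy "tumor region" intensities
feats = first_order_features(roi)
print({k: round(val, 2) for k, val in feats.items()})
```

In practice such features are computed per segmented lesion and fed to the downstream classifier alongside, or instead of, the raw images.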
The studies encompassed X-rays, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography – Computed Tomography (PET-CT) scans, endoscopic images and videos, photographs, ultrasounds, histological slides, and whole-slide images (WSI). Besides digital pictures, which are limited to the surface, these imaging techniques capture the body's internal structures. However, they differ in the way they create images and the type of information they provide. X-rays create scans using ionizing radiation, to which the patient is exposed. Although time- and cost-effective, these do not provide as much detail as CT or MRI scans. In this review, two studies used radiographic images (2/37, 5.4%) to: (i) classify pathologically-confirmed primary bone tumors in children and adults (639 radiographs, 175 female, 116 male); and (ii) screen for breast cancer in adult women ( n = 1, 213 694 X-rays, 92 585 women). CT scans combine X-rays from different angles to create high-quality, three-dimensional images. Nevertheless, since they are generated from controlled motions of X-rays, CTs are still unfit for extracting molecular information. Furthermore, these scans subject the patient to higher radiation levels than X-rays and may require contrast agents depending on the adopted modality – contrast-enhanced CTs (CECTs) versus non-contrast CTs (NECTs). CT scans were common inputs in the selected articles (8/37, 21.6%), amounting to 7 540 images from: (i) the lungs ( n = 4, 2 323 nodules, 2 113 patients); (ii) gastric cancers ( n = 2, 1 129 images, 352 women, 777 men); (iii) cervical lymph nodes ( n = 1, 3 838 images, 698 patients of unknown gender); and (iv) hepatic metastasis from colorectal cancer ( n = 1, 250 lesions, 31 women, 54 men).
Conventional MRI (cMRI) sequences include standard MRI protocols commonly used in clinical practice, such as (i) T1-weighted: used to identify structural abnormalities; (ii) Axial fluid-attenuated inversion recovery MRI (FLAIR), applied to identify abnormalities that affect the tissues' water content; and (iii) T2-weighted: also appropriate to assess irregularities in water content. Advanced MRI (advMRI) techniques generate deeper information regarding the tissue's function, structure, and metabolic processes, including: (i) multiparametric MRI (mpMRI), which combine several other MRI sequences to enrich its output; (ii) axial diffusion-weighted (DWI) MRI, which measure the movement of water molecules in tissues; (iii) Vascular architecture mapping (VAM) MRI, providing information about the tissue's blood vessels; (iv) Gradient echo dynamic susceptibility contrast (DSC) MRI, used to measure blood movement; (v) Quantitative blood-oxygenation-level-dependent (qBOLD) MRI, able to measure the oxygen content in the blood; (vi) General Electric-Dynamic Susceptibility Contrast (GE-DSC) MRI, which resorts to a contrast agent to measure blood flow; and (vii) Magnetic resonance spectroscopy (MRS), which calculate the levels of certain chemicals and metabolites in the tissues. Although some types of MRIs – such as MR spectroscopy and diffusion-weighted imaging – allow assessing molecular details without contrasts, most are better equipped to analyze gross internal structures and are more expensive than CTs and X-rays . MRI scans were also frequently used as input for the models, with 64 941 combined images from 8 studies (21.6%), including (i) the brain ( n = 3, 64 459 lesions, 623 women, 461 men) ; (ii) the prostate ( n = 2, 262 nodules, 300 men) ; (iii) colorectal malignancies ( n = 2, 154 images, 54 women, 64 men) ; and (iv) bones and cartilages ( n = 1, 65 scans, 34 women, 31 men) . 
PET scans, which rely on an injected radioactive tracer, allow for examining the internal body structure and underlying molecular tissues. However, these are extremely expensive, usually unavailable in routine practice, and, due to their low spatial resolution, require pairing with a second modality, such as CTs and MRIs. In this review, one study (2.7%) used PET-CT scans to examine atypical cartilaginous tumors and appendicular chondrosarcomas (36 scans, 23 women, 13 men). Similarly to X-rays, ultrasounds – which use high-frequency sound waves to create images – provide an inexpensive method to inspect organ structures without detailing underlying molecular information, with the upside of not involving radiation. Ultrasonographic imaging was mentioned in 2 articles ( n = 2, 5.4%), which studied breast cancers (116 ultrasounds, 107 women). Eight reports describe images captured with standard endoscopes ( n = 8, 24.3%, 3681 items), which cannot capture molecular features. Four studies used colonoscopic lesions from the colon and rectum (995 images, 105 women, 224 men). Four studies analyzed endoscopic pictures of the esophagus ( n = 2, 260 images, 260 patients of unknown gender), the larynx ( n = 1, 1 176 images, unknown number of patients), and the nasopharynx ( n = 1, 1 430 images, 124 women, 231 men). Lastly, one study examined endoscopic videos from intramucosal gastric cancer patients (54 videos, 38 women, 16 men). Two studies used advanced endoscopes. One involved endoscopic ultrasonography (EUS), a technique that combines endoscopy and ultrasonography to gather gastrointestinal images ( n = 1, 2.7%, 212 ultrasounds, 38 women, 31 men). The other resorted to endocytoscopy, a relatively new high-magnification imaging approach that allows tissue analysis at a cellular level, to collect 100 colorectal images from 89 patients ( n = 1, 2.7%, 26 women, 63 men).
A histological image is a high-resolution, microscopic image of a tissue slide after it has been processed with one or more stains to reveal its composition. This method allows distinguishing between different histological cancer subtypes but involves a long preparation time and offers a limited depth of view. One paper used hematoxylin-and-eosin (H&E)-stained histological images to study endometrium hyperplasia and intraepithelial neoplasia ( n = 1, 2.7%, 1 631 slides, 102 women). Whole-slide images (WSIs) are virtual representations of a tissue section scanned at high resolution and magnification. WSIs are created by scanning stained histological slides and usually combine and magnify multiple slides using specialized software. This technique allows thorough tissue examination at cellular and sub-cellular levels, but it remains heavy in cost, storage, and technical requirements. WSIs were used to feed the models in three studies (8.1%, 3 315 images), using 30 × or 40 × magnification. Two included H&E-stained slides of the liver ( n = 1, 80 slides, 24 women, 56 men) and the mesothelium ( n = 1, 39 images, 39 patients of unreported gender). One was composed of stained slides (unknown stain) for the cervical screening of women without any known conditions and with the Human papillomavirus (HPV) ( n = 1, 1565 images and women). Finally, 46 962 digital photographs (captured with a camera) were analyzed across two documents (5.4%). Both inspected skin malignancies ( n = 2, 10 602 patients). Detailed information regarding the samples, type of CTs, MRIs, and endoscopes used in the image-based studies, as well as population details and counts (age group, total patients, female, and male), is itemized in Table .

Text-based Studies

The populations and specific clinical variables used in each text-based study are compiled in Table . Clinical data from 6 803 patients (2 772 women, 4 031 men, 7 861 encounters) were collected for validation across ten papers.
Apart from one work including senior citizens , all studies consisted of adult patients (6 644 subjects, 2 701 women, 3 943 men). An average of 17 clinical variables was used per study (range = 6 – 31 ), encompassing information on demographics, tumoral values, and laboratory test results. The machine learning models used in 6 of the articles (60%) were exclusively developed for classification (1 960 women, 3 097 men) , while 4 (40%) solely concerned regression (812 women, 934 men) . In the four regression-based articles, the developed prognostic models assessed (i) patients with a single lesion of primary stage I to IV esophageal adenocarcinoma or squamous cell carcinoma ( n = 1, 150 women, 350 men) ; (ii) patients with pathologically confirmed and resected intrahepatic cholangiocarcinoma (12 women, 30 men) ; (iii) patients with stage I to III non-small cell lung cancer (642 women, 540 men) ; and (iv) patients in palliative care with unresectable advanced pancreatic ductal adenocarcinoma with liver metastases (8 women, 14 men) . The six classification papers included: (i) seniors with stage I to III non-small cell lung cancer treated with curative-intent radiotherapy (159 individuals, 71 women, 88 men) ; (ii) bone metastasis in kidney cancer patients with complete survival data (323 women, 640 men) ; (iii) women with primary breast cancer diagnosed by pathological examination (150 women) ; (iv) patients with primary colorectal cancer with survival-related data who underwent surgery (1 572 patients, 607 female, 965 male) ; (v) patients with confirmed stage III non-small cell lung cancer (39 women, 133 men) ; and (vi) patients with solid metastatic tumors for several types of cancer with and without alterations in treatment in an outpatient setting (3 099 encounters, 2 041 individuals, 770 women, 1 271 men) . 
Mixed Studies

An average of 9 clinical variables (range = 3 – 17), 784 images, and 720 patients (range = 44 – 5 493 for both) were used in the nine mixed studies, whose information is highlighted in Table . These papers combined patients' demographics, cancer-specific data, laboratory results, and imaging features extracted from different modalities for cancer-specific populations (7 053 images, 6 482 patients, 3 009 women, 3 478 men). Radiomics approaches were used in three studies. Six reports included CT images to study: (i) patients who underwent curative-intent resection for pancreatic ductal adenocarcinoma ( n = 1, 53 images, 27 women, 26 men); (ii) patients with benign and malignant pulmonary ground-glass nodules with less than 30 mm ( n = 1, 63 images, 39 women, 22 men); (iii) individuals with multiple lung nodules in a post-operative setting ( n = 1, 200 images, 51 women, 27 men); (iv) lung cancer patients with an available baseline radiograph ( n = 1, 5 493 patients and images, 2456 women, 3037 men); (v) patients with muscle-invasive bladder cancer who underwent surgery ( n = 1, 75 images, 13 women, 62 men); and (vi) adults with pathologically confirmed thymomas and thymic carcinomas ( n = 1, 76 preoperative scans, 33 women, 48 men). Additionally, three studies used other types of scans. One work paired breast-specific data with features derived from three types of MRI scans for women with endometrial lesions and complete clinical data (44 images, 44 women). One paper combined patients' age, sex, tumor type, location, and radiomic features extracted from X-rays to analyze primary bone tumors (40 women, 56 men). Finally, one study evaluated survival- and gross-tumor-related data in conjunction with H&E slides magnified 30 times (whole-slide images) to estimate outcomes for patients diagnosed with gastric cancer (175 images, 91 patients, 60 female, 31 male).
Except for the models developed in one study, where the first used only WSIs for classification and the second used these images and clinical data for prognostication (regression), all algorithms were classifiers.

Validation design, clinical settings and performance metrics

Information concerning institutional, study, and validation designs, care types, datasets, clinical settings, and the number of institutions involved in validation in the selected documents is illustrated in Table . Model development and validation were performed simultaneously in most studies ( n = 50, 87.5%), while 4 (7.14%) evaluated external validity separately, and 3 (5.36%) entailed model updating and validation. Of the 56 documents included in this review, 44 (78.57%) directly reference external validation in the abstract, 10 (17.86%) indirectly mention it, and 2 (3.57%) omit this information. Overall, 74 medical datasets were used for external validation across the 56 studies, averaging 1.3 per paper (range = 1 – 8). All studies used real-world data acquired prospectively or collected from the patients' EHRs and imaging archiving platforms. Except for three articles using both standard and uncommon types of MRI scans and one using endocytoscopy (whose use is still growing), all studies used text- and image-based data routinely collected in clinical practice. However, only nine reports describe external validation in clinically realistic scenarios, and solely two systems are currently implemented in practice. The papers involved several cancer-related settings, including secondary ( n = 1, 2%), tertiary ( n = 34, 61%), and quaternary ( n = 12, 21%) oncology care. However, 6 (11%) studies did not report from which centers data were retrieved, and 3 (5%) used databases without this information.
Among the collected studies, 49 (87.5%) were conducted retrospectively, 3 (5.36%) were prospective, and 4 (7.15%) were mixed: one performed internal validation prospectively and external validation retrospectively, one proceeded inversely, and two used both retrospective and prospective cohorts. Only one report used randomized data. Regarding validation design, 31 (55.357%) studies followed a multi-institutional approach, 14 (25%) collected information from a single center, 1 (1.786%) only used public databases, 2 (3.572%) used public multi-institutional databases, and 8 (14.286%) used both types of sources. For the multi-institutional studies (including databases), the average number of facilities used for validation was 3, with a maximum of 33. One study did not report the number of institutions involved. The following freely available data sources were used: (i) the Surveillance, Epidemiology, and End Results (SEER) database, which covers population-based cancer registries of approximately 47.8% of the United States population; (ii) The Cancer Genome Atlas (TCGA, from the USA), which molecularly characterizes over 20,000 primary cancers, and contains whole-slide images; (iii) The Cancer Imaging Archive, which hosts a large number of medical images for various types of cancer; (iv) the Edinburgh dataset, containing data from the University of Edinburgh (Scotland, United Kingdom); (v) the Prostate, Lung, Colorectal, and Ovarian (PLCO) randomized trial sponsored by the National Cancer Institute (NCI), designed to evaluate the impact of cancer screening on mortality rates, as well as to assess the potential risks and benefits associated with screening; (vi) the National Lung Screening Trial (NLST), a randomized controlled trial also supported by the NCI that aimed to evaluate the impact of using low-dose helical CT scans on patient mortality; (vii) the PROSTATEx dataset, which contains a retrospective set of prostate MRI studies; (viii) the PICTURE
dataset, containing data from a single-center trial, and intended to evaluate the diagnostic accuracy of multiparametric magnetic resonance imaging (mpMRI) in men with prostate lesions ; and (ix) the National Human Genetic Resources Sharing Service Platform (NHGRP), for which we could not find any details . In two studies, models were trained using data from multiple countries. One developed their model using patients from three Chinese institutions and one center from the United States of America (USA) and validated it on a Chinese dataset ( n = 1, 1.8%) . The other gathered data from a Chinese institution and TCGA and validated their model on images from NHGRP . Additionally, one document did not report which countries were involved in their model’s development or validation . All other authors developed their model on data from a single country. These included China ( n = 19, 33.7%), the USA ( n = 12, 21.4%), South Korea ( n = 9, 16.1%), Italy and Germany (3 each, 5.4%), Japan and the Netherlands (2 each, 3.6%), and the United Kingdom (UK), Canada, and Austria (1 each, 1.8%). Besides the two abovementioned papers , twelve other studies performed international validation. Of these, six included ethnically different sources. Two authors trained their model with data from South Korea: one validated it on South Korean and American datasets , and the other validated it on a South Korean dataset and the Edinburgh dataset (UK) . Additionally, five reports mention training their model on the SEER database (USA), with four validating it with Chinese patients and one with South Korean patients . 
For the five remaining studies, patients with the same ethnicity were included: (i) one was developed with the NLST trial dataset (USA) and validated on data from the UK; (ii) one was trained with data from TCGA (USA) and validated on an institution from the UK; (iii) one used data from Italy for training and patients from The Netherlands for validation; (iv) one trained their model on the PROSTATEx dataset (from The Netherlands) and validated it on the PICTURE dataset (from the UK); and (v) one used a Chinese dataset for training and Chinese and South Korean patients for validation. Regarding validation types, 12 studies (21.43%) were limited to temporal validation from a single institution, which cannot be interpreted as a fully independent validation. Five other studies also only temporally validated their model. However, two used a multi-institutional approach (3.57%), two (3.57%) used different data acquisition designs (retrospective internal validation and prospective external validation), and one evaluated performance for patients at different treatment stages (1.79%). Nine studies (16.07%) only validated their model geographically, seven within the same country, one internationally, and one with internationally and ethnically different patients. Twenty-nine reports (51.79%) included both temporal and geographical validation. Sixteen (28.57%) used local data, one evaluated temporally and geographically different patients from the same country with images captured using various scanners, and one (1.79%) used national data and mixed data acquisition (prospective internal validation and retrospective external validation). Lastly, one study that did not report data sources validated their model on different types of computed tomography (CT) scanners. The external datasets were used to evaluate the models' generalizability to populations differing – geographically, temporally, or both – from the development cohort.
The performance metrics reported in the articles can be branched into three categories: discrimination, calibration, and processing time. For classification models, an average of 5 metrics (range = 1 – 7) was used to assess discrimination. These consisted of (i) sensitivity, reported in 48 papers; (ii) area under the receiver operating characteristic (ROC) curve (AUC), calculated in 43 studies; (iii) specificity, used in 42 articles; (iv) accuracy, presented in 35 documents; (v and vi) positive and negative predictive values (PPV and NPV), computed in 29 and 19 reports, respectively; (vii) F1-score, considered in 13 papers; (viii) C-index, used in 2 articles; (ix) false positive rate, reported in two papers; (x) area under the alternative free-response ROC curve (AUAFROC), calculated for one model; (xi) jackknife alternative free-response ROC (JAFROC), also computed for one algorithm; and (xii) Softspot (Sos) and Sweetspot (Sws) flags, both used in the same two papers. However, decision thresholds were only disclosed for half of the articles (26/52, 50%), and only three papers presented results for different cut-off values/settings. Likewise, 39 classification studies did not assess calibration. When evaluated (13/52, 25%), calibration was illustrated graphically in five studies (9.62%), via Brier Score in three documents (5.77%), using both approaches in four papers (7.69%), and with mean absolute error (MAE) in one report. Lastly, the models' processing time was also seldom revealed, with only seven studies reporting it. For the regression-based algorithms, discriminative performance was assessed via C-index. Regarding calibration, the model's Brier Score was presented in one study, calibration plots in two, both metrics in one, and none in two. The models' processing time and decision thresholds were not reported in any of these studies.
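For concreteness, the most commonly reported discrimination metrics above, plus the Brier score used for calibration, can be computed from first principles. This is an illustrative sketch, not code from any of the reviewed studies, and the function names are hypothetical:

```python
# Illustrative implementations of the metrics discussed above (hypothetical
# helper names, not taken from any of the reviewed papers).

def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary labels, with 1 = positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def discrimination_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, accuracy, and F1 in one pass."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # positive predictive value (precision)
    npv = tn / (tn + fn)             # negative predictive value
    acc = (tp + tn) / len(y_true)    # accuracy
    f1 = 2 * ppv * sens / (ppv + sens)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1}

def brier_score(y_true, y_prob):
    """Calibration: mean squared gap between predicted probability and outcome."""
    return sum((p - t) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)
```

A lower Brier score indicates better-calibrated probabilities (0 is perfect), which is why it complements threshold-based metrics such as sensitivity and specificity.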
Clinical utility

From the selected studies, the majority (n = 50, 89.29%) explicitly mentions the assessment of the models' clinical utility, that is, their relevance to clinicians and patient outcomes, in the paper's abstract. However, one only refers to it indirectly (1.79%), and the remaining five (8.93%) do not state this aspect in their summaries. Two approaches were used to assess the models' utility: comparison against clinician performance, adopted in most studies (40/56, 71.4%), and benchmarking against established clinical tools (15/56, 26.8%). Additionally, one study used both: retrospective comparisons were performed against routine clinical scores, while prospective assessments involved clinicians (1/56, 1.8%).

Comparison Against Clinicians

Four hundred and ninety-nine medical professionals of varying expertise were involved in these studies, with an average of 12 clinicians compared against each model (range = 1 – 109). These included endoscopists (n = 204), oncologists (n = 77), radiologists (n = 76), general physicians (n = 71), dermatologists (n = 44), pathologists (n = 21), ophthalmologists (n = 3), and thoracic surgeons (n = 3). A subset of 113 115 patients (102 178 female, 9 619 male) was used for these assessments, and the same performance metrics as those documented for external validation were observed, plus time until diagnosis. Specific clinicians' years of experience were reported in 20 papers (48.8%), ranks (without years) in 11 (26.8%), and no information concerning expertise in 10 (24.4%). The 41 classification studies encompassing model comparison against clinicians can be divided into two designs: comparing clinician performance with and without model assistance, and evaluating the models and the clinicians independently. The most commonly adopted technique was separately assessing model and clinician performance and comparing the results afterwards (n = 30, 73.2%).
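When a model and clinicians label the same cases, the difference in their error rates is commonly tested with an exact McNemar test on the discordant pairs. The sketch below is a minimal illustration of that standard procedure, not code taken from any of the reviewed papers:

```python
# Illustrative sketch: exact (binomial) McNemar test for paired
# model-vs-clinician comparisons on the same cases.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value.
    b = cases the model got right and the clinician got wrong,
    c = cases the clinician got right and the model got wrong.
    Concordant cases (both right or both wrong) carry no information here."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # binomial tail at p = 0.5, doubled for a two-sided test, capped at 1
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, if the model corrects 5 clinician errors while introducing 1 new error, `mcnemar_exact(5, 1)` returns about 0.219, i.e. the difference is not significant on so few discordant cases.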
Four hundred and one clinicians (μ = 15 per report, range = 1 – 109) and 109 720 patients (μ = 3 657 per paper, 100 965 female, 8 203 male) were involved in these papers, and model–clinician performance was compared for detection and diagnostic capabilities. An average of 4 performance metrics (range = 1 – 7) were computed per paper, with sensitivity being the most calculated (n = 23), followed by specificity (n = 18), accuracy (n = 15), AUC (n = 11), PPV (n = 11), NPV (n = 7), F1-score (n = 3), false positive rate (n = 2), Sweetspot and Softspot flags (n = 2), diagnostic time (n = 1), AUAFROC (n = 1), and JAFROC (n = 1). The second approach involved comparing clinician performance with and without the assistance of the artificially intelligent systems developed by the authors (n = 11, 26.8%). The eleven studies employing this method comprised 92 clinicians (μ = 8, minimum = 1, maximum = 20) and 3 337 patients (μ = 370, 1 223 female, 1 416 male). Similarly to the previous technique, an average of 4 performance metrics were used per paper (range = 1 – 6), including sensitivity (n = 9), specificity (n = 8), accuracy (n = 8), PPV (n = 6), NPV (n = 5), AUC (n = 2), mean diagnostic time (n = 2), and error rate (n = 1).

Comparison Against Standard/Established Clinical Tools

In sixteen studies, assessing the usefulness of the models involved comparing their performance against well-established and routinely used clinical tools. In total, 11 659 patients (μ = 777 per paper, 4 521 female, 5 694 male) were encompassed in these assessments, and thirteen standard tools were used for comparisons.
These included: (i) the 7th and 8th editions of the Tumor, Node, and Metastasis (TNM) staging system; (ii) the Brock University model; (iii) the Fracture Risk Assessment Tool (FRAX); (iv) the Liver Cancer Study Group of Japan (LCSGJ); (v) the Mayo Clinic model; (vi) the modified Glasgow Prognostic Score (mGPS); (vii) the Osteoporosis Self-Assessment Tool for Asians (OSTA); (viii) the second version of the Prostate Imaging Reporting and Data System (PI-RADS v2); (ix) the Peking University (PKU) model; (x) the PLCOm2012 model; (xi) the Response Evaluation Criteria in Solid Tumors (RECIST); (xii) the Veterans Affairs (VA) model; and (xiii) the World Health Organization (WHO) performance status. Except for one study, all papers explicitly mention comparisons against these tools in the abstract. The TNM system, created by the American Joint Committee on Cancer (AJCC), is globally used in routine clinical procedures. It categorizes cancer progression and guides subsequent treatment decisions depending on (i) the size and extent of the primary tumor (T), (ii) whether it has spread to nearby lymph nodes (N), and (iii) whether it has metastasized to distant organs (M). In this review, two text-based classification studies compared their models against the 7th edition of this staging system (TNM-7): one juxtaposed diagnostic and prognostic (3-year overall survival) predictions for bone metastasis in kidney cancer patients (323 women, 640 men), while the other compared 1–10-year postoperative survival predictions for patients with colorectal cancer (607 women, 965 men). Similarly, seven papers resorted to the 8th edition of AJCC TNM (TNM-8), its revised and updated version. On the one hand, in four articles, the models were only compared against this system. Two analyzed their text- and regression-based models to predict cancer-specific survival for esophageal (500 patients, 150 women, 350 men) and lung tumors (1 182 individuals, 642 female, 540 male).
The other two concerned the evaluation of classification models. Using preoperative images and descriptive data, one compared 2-year overall survival and 1-year recurrence-free survival predictions for patients with pancreatic cancer (27 female, 26 male). The other compared risk stratification performance for overall survival for lung cancer patients (39 women, 133 men) between their model and the TNM-8 system using only text-based data. On the other hand, in three text-based studies, models were compared against TNM-8 and other tools. One paper also contrasted model performance for recurrence, recurrence-free survival, and overall survival for lung cancer patients (71 women, 88 men) with the WHO performance status, often used in oncology to determine patients' overall health status, prognosis, and ability to tolerate treatment. This scaling system ranges from 0 to 4, where 0 represents no symptoms and pre-disease performance, and 4 translates to total disability. In the second article, predictions of overall postoperative survival were benchmarked against TNM-8 and LCSGJ (42 liver cancer patients, 12 women, 30 men). The LCSGJ is a group of Japanese medical professionals specializing in diagnosing and treating liver cancer, recognized as a leading authority in this cancer research field. Lastly, the third study describes the development of three risk models for breast cancer patients (150 women): (i) fracture, whose predictions were contrasted with those generated by FRAX; (ii) osteoporosis, compared against FRAX and OSTA; and (iii) survival, benchmarked against TNM-8. FRAX is a web-based tool designed to stratify 10-year bone fracture risk, and OSTA assesses the risk of osteoporosis in Asian populations. The Brock University (also known as PanCan) model is a logistic regression model devised to assist in risk stratification for lung cancer.
It is recommended in the British Thoracic Society guideline as a tool to decide whether nodules measuring 8 mm or more in maximum diameter should be assessed further with PET-CT. Here, it was applied in one of the selected papers to compare predictions of malignancy risk for lung cancer from CECT and NECT scans (1 397 images, 1 187 patients, unknown sex proportion). In addition to the Brock model, comparisons in a second paper (978 CTs, 493 patients, 297 women, 196 men) were also performed against three other tools: (i) the Mayo model, which the Mayo Clinic developed to assess cancer prognosis and predict patient outcomes; (ii) the PKU model, created by Peking University; and (iii) the VA model, developed within the Veterans Affairs healthcare system, a comprehensive cancer care system that aims to provide high-quality, evidence-based care to veterans with cancer. The mGPS scale is a validated scoring system formulated to assess the prognosis of patients with advanced or metastatic cancer based on nutritional and inflammatory markers. In this review, it was used to establish clinical utility for a text-based classification model developed to predict overall survival for patients with unresectable pancreatic tumors (22 patients, 8 women, 14 men). PI-RADS is a standardized system for interpreting and reporting findings from prostate MRI scans, created to guide clinical decision-making in diagnosing and treating prostate cancer. In this context, it was contrasted against a model developed to stratify low- and high-risk patients (39 and 14 men, respectively). PLCOm2012 is a validated risk score that uses logistic regression to predict the probability of lung cancer occurrence within six years based on demographic and clinical information. It was the chosen comparator in a study predicting 12-year lung cancer incidence using low-dose CT images and patients' age, sex, and smoking status (5 493 images and patients, 2 456 women, 3 037 men).
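Risk tools such as PLCOm2012 (and the Brock and Mayo models above) are logistic regressions: a weighted sum of covariates passed through a sigmoid to yield a probability. The sketch below uses invented coefficients purely for illustration; it is not the published PLCOm2012 model:

```python
# Hypothetical sketch of a logistic-regression risk score. The coefficients
# are invented for illustration and are NOT the published PLCOm2012 weights.
from math import exp

def lung_cancer_risk(age, smoking_years, cigarettes_per_day):
    # toy linear predictor; real tools use many more validated covariates
    lp = -7.0 + 0.05 * age + 0.03 * smoking_years + 0.02 * cigarettes_per_day
    return 1 / (1 + exp(-lp))   # sigmoid maps the score to (0, 1)
```

The design mirrors how such scores are used clinically: the linear predictor aggregates risk factors, and the sigmoid converts it into a probability that can be compared against a screening threshold.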
Finally, RECIST is a set of guidelines used to evaluate the response of solid tumors to treatment in clinical trials and clinical practice. It was compared against two classification models: one aimed at detecting pathological downstaging in advanced gastric cancer patients from CECT images (86 patients and images, 23 women, 27 men); the other was designed to predict pathological tumor regression grade response to neoadjuvant chemotherapy in patients with colorectal liver metastases from MRI scans (61 images, 25 patients, 13 female, 12 male). Few performance metrics were reported for the comparisons between the models developed in the selected papers and routinely used clinical tools, with an average of 3 metrics per document (range = 1 – 6). Here, the most frequently calculated metrics were AUC (n = 11) and sensitivity (n = 8), but PPV (n = 5), C-index (n = 4), specificity (n = 4), accuracy (n = 3), NPV (n = 3), Brier Score (n = 2), and F1-score (n = 1) were also used in the evaluations.

Primary tumors

Fifty-one papers (91.1%) describe models developed for primary tumor-related assessments. These include cancers of the CNS (brain), digestive (colorectal, esophageal, gastric, and hepatic malignancies), endocrine (pancreas and thymus), genitourinary (bladder, cervix, prostate, and uterus), and integumentary (breast and skin) systems, the respiratory system and associated tissues (larynx, lung, mesothelium, and nasopharynx), and the skeleton (cartilages and bones).

Central nervous system

Three retrospective studies were developed to diagnose brain cancers using MRI scans, amounting to 1 084 patients and 64 459 images, resulting in an average sensitivity of 81.97% and specificity of 91.63% (Table ).
The first involved the following conditions: acoustic neuroma, pituitary tumor, epidermoid cyst, meningioma, paraganglioma, craniopharyngioma, glioma, hemangioblastoma, metastatic tumor, germ cell tumor, medulloblastoma, chordoma, lymphoma, choroid plexus papilloma, gangliocytoma, dysembryoplastic neuroepithelial tumor, and hemangiopericytoma. The CNN-based model was trained on images from 37 871 patients and externally validated using 64 414 T1-weighted, T2-weighted, and T1c MRI scans from 1 039 subjects (600 female, 349 male) from three institutions. Its diagnostic performance was compared against nine neuroradiologists (5 to 20 years of experience) to assess clinical utility. This CNN classified brain tumors with high accuracy, sensitivity, and specificity, performing particularly well in identifying gliomas, which are difficult to diagnose using traditional imaging methods. When aided by the model, the neuroradiologists' accuracy increased by 18.9%, which was still lower than the model alone. AI assistance also boosted the neuroradiologists' sensitivity, specificity, and PPV. However, only three types of scans were considered, training data were obtained from a single center, and few rare tumors were included. In the second paper, the authors explored the combination of 9 different ML models – NB, logistic regression, SVM with a polynomial kernel, kNN (k = 3), DT, MLP, RF, AdaBoost, and bootstrap aggregating – to distinguish between different types of brain tumors (glioblastoma, anaplastic glioma, meningioma, primary central nervous system lymphoma, and brain metastasis). In total, 135 classifier combinations were analyzed with radiomics across five MRI input sets: cMRI, advMRI, phyMRI, cMRI + phyMRI, and advMRI + phyMRI. A dataset of 167 patients was used for training, and temporal validation was performed on 20 subjects. The physiological MRI (phyMRI) approach, named radiophysiomics, achieved the best results using AdaBoost with cMRI and phyMRI and RF with phyMRI.
Both models surpassed the radiologists in AUC and F1-score but were outperformed in sensitivity and specificity. The AdaBoost model also had a higher PPV than the clinicians. However, this was a single-center, retrospective study, and the application and tuning of the models were performed manually. The third study evaluated the usefulness of preoperative contrast-enhanced T1- and T2-weighted MRI in differentiating low-grade gliomas (LGG) from glioblastomas (GBM). The authors trained a radiomics-based RF classifier on 142 patients from 8 American centers and externally validated it on 25 patients from another institution (all from The Cancer Imaging Archive). The results showed that the machine learning algorithm was highly accurate in differentiating between GBM and LGG based on preoperative contrast-enhanced MRI scans, surpassing two neuroradiologists (15 and 1 year of experience) and a radiologist (3 years of experience). However, few patients, all drawn from a public database, were collected, possibly resulting in selection bias (non-random selection).

Digestive system

Malignancies of the digestive system – highlighted in Table – were the most comprehensively studied (17/56, 30.4%), encompassing colorectal (n = 7, 41.2%), esophageal (n = 3, 17.6%), gastric (n = 5, 29.4%), and liver (n = 2, 11.8%) cancers.

Colorectal Cancer

Three sets of articles addressed colorectal cancers (7 papers). The goal of the first set, consisting of four multi-institutional retrospective studies, was its diagnosis, averaging a sensitivity of 77.3% and a specificity of 93.2% for tests on 995 images from different sources. In the first study, the authors developed an ensemble of three CNNs (Inception-v3, ResNet-50, and DenseNet-161) to predict the histology of colorectal neoplasms based on white-light colonoscopic images.
The ensemble model transferred knowledge from digital photography and learned with colonoscopic images to classify each image into one of four pathologic categories: normal (healthy), adenoma with low-grade dysplasia (A-LGD), adenoma with high-grade dysplasia (A-HGD), and adenocarcinoma. The system's diagnostic performance was compared against four experts (more than five years of experience) and six trainees (less than two years). On the external validation dataset (400 images, 100 of each type), the CNN-CAD model achieved high accuracy in predicting the histology of the lesions. Compared to endoscopists, the model's performance was slightly better than the experts' and significantly outperformed the trainees'. In addition, the authors used Grad-CAM to create a heatmap highlighting the regions of the input image that were most relevant to the network's decision. However, only one image per polyp was used; consequently, tumors that cannot be contained within a single image were neglected. The second work concerns the external validation and clinical utility assessment of EndoBRAIN, an AI-assisted system to classify colorectal polyps as malignant or non-malignant. EndoBRAIN was trained with 69 142 endocytoscopic images from patients with colorectal polyps from five academic centers in Japan. Its clinical validity had previously been confirmed in a single-center prospective study. However, since its implementation depends on governmental regulatory approval, the current study compared EndoBRAIN's diagnostic performance against 30 endoscopists (20 trainees, 10 experts) using stained and narrow-band endocytoscopic images in a web-based trial. The authors found their CADx tool accurately differentiated neoplastic from non-neoplastic lesions, outperforming all endoscopists for stained images and achieving similar performance on narrow-band images, and it was accepted for clinical use.
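CNN ensembles like the Inception-v3/ResNet-50/DenseNet-161 combination described above are commonly built by soft voting, i.e. averaging each network's class-probability vector. A minimal sketch with made-up probabilities (the four colorectal categories come from the study; the numbers do not):

```python
# Minimal soft-voting sketch: average per-model probability vectors and
# pick the argmax. The probabilities below are invented for illustration.

def soft_vote(prob_vectors):
    """Average a list of per-model probability vectors and pick the argmax."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# e.g. three models scoring classes [normal, A-LGD, A-HGD, adenocarcinoma]
preds = [[0.10, 0.60, 0.20, 0.10],
         [0.05, 0.40, 0.35, 0.20],
         [0.15, 0.55, 0.20, 0.10]]
label, avg = soft_vote(preds)   # label == 1 (A-LGD)
```

Averaging probabilities rather than hard labels lets a confident model outweigh two uncertain ones, which is one reason soft voting is the usual choice for CNN ensembles.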
The third diagnostic model concerns the development of a deep learning model to predict the revised Vienna Classification in colonoscopy, which categorizes colorectal neoplasms into different levels of malignancy, using standard endoscopic colonoscopy images. Several CNN architectures were compared, namely AlexNet, ResNet152, and EfficientNet-B8, with ResNet152 being chosen as the prediction model due to its higher accuracy and fastest inference time. The model was trained using 56 872 colonoscopy images (6 775 lesions) and validated on 255 images (128 lesions) from 7 external institutions in Japan. The authors also compared diagnostic performance against endoscopists (five novices, three fellows, and four experts). The AI system's sensitivity and specificity exceeded those of all endoscopists. Nevertheless, the model cannot discriminate between high-grade dysplasia and invasive cancer (categories 4 and 5 of the revised Vienna Classification), and only binary classification is supported. In the fourth document, the authors tested two pre-trained radiomics-based CNN architectures (Inception-ResNet-v2 and ResNet-152) to automatically classify colorectal neoplasms into three types of sets: 7-class (T1–T4 colorectal cancer, high-grade dysplasia, tubular adenoma, and non-neoplasms), 4-class (advanced CRC vs. early CRC vs. adenoma vs. healthy), and 2-class (neoplastic vs. non-neoplastic, and advanced vs. non-advanced lesions). The CNNs were trained on a South Korean dataset (3 453 colonoscopy images, 1 446 patients) and temporally and geographically validated on 240 images (and as many patients) from another institution. CAM was used to highlight its decisions. The best-performing architecture was ResNet-152 for 7-way and 4-way diagnoses, but Inception-ResNet-v2 achieved better results on binary classifications.
In addition, the model's performance was compared with one novice and two experienced endoscopists with six months and more than five years of colonoscopy experience, respectively. Although resulting in high accuracy, neither CNN architecture could outperform the endoscopists. Furthermore, this retrospective study only considered three types of diseases and white-light colonoscopy images. The second set of articles was devoted to predicting outcomes from MRI scans in patients with colorectal cancer undergoing neoadjuvant chemoradiotherapy (NCRT), accruing 143 MRIs from 118 patients and a mean AUC and accuracy of 0.77 and 81.9%, respectively. The first was a prospective study using a multipath CNN on MRI scans (diffusion kurtosis and T2-weighted). The authors used a dataset of 412 patients (290 for development and 93 for temporal validation) with locally advanced rectal adenocarcinoma scheduled for NCRT. The researchers developed three multipath CNN-based models: one to preoperatively predict pathologic complete response (pCR) to neoadjuvant chemoradiotherapy, one to assess tumor regression grade (TRG) (TRG0 and TRG1 vs. TRG2 and TRG3), and one to predict T downstaging. In addition, the authors evaluated the models' utility by comparing two radiologists' performance – with 10 and 15 years of experience – with and without their assistance. The results showed excellent performance in predicting pCR, superior to the assessment by the two radiologists, whose error rate was also reduced when assisted by the DL model. Although with lower performance, the TRG and T downstaging models also achieved promising results, with AUCs of 0.70 and 0.79, respectively, though not outperforming the clinicians. Nevertheless, this monoinstitutional research required manual delineation, and interobserver variability was not analyzed. Moreover, further validation studies are necessary to assess performance with different MRI scanners.
The second group of researchers developed an MRI-based CNN (DC3CNN) to predict tumor regression grade (an assessment of tumor size) in response to NCRT in patients with colorectal liver metastases. The authors used prospective internal (328 lesions from 155 patients) and retrospective external cohorts (61 images, 25 patients) to collect pre- and post-treatment T2-weighted and DW-MRI scans. The model surpassed the diagnostic accuracy of RECIST, the most commonly used criteria for clinical evaluation of solid tumor response to chemotherapy. However, the external validation was retrospective, and further studies are needed to validate performance in larger, ethnically diverse patient populations. Lastly, only one model assessed postoperative survival of colorectal cancer using text-based data. The model was trained on the SEER database (364 316 patients) and externally validated (temporally and ethnically) on a Korean dataset (1 572 subjects, 607 women, 965 men). The authors compared 4 ML algorithms, namely logistic regression, DTs, RFs, and LightGBM, to obtain an optimal prognostic model. The best-performing model – LightGBM – outperformed TNM-7 in predicting survival for all tested periods (1, 2, 3, 4, 5, 6, 8, and 10 years). Still, data were collected retrospectively from a public database and a single institution using only text-based data, so prospective studies are necessary, and clinicopathological, molecular, and radiologic variables should also be incorporated.

Esophageal Cancer

Three studies involved esophageal cancers. Two papers studied neoplasia detection in patients with Barrett's esophagus, a condition resulting from long-term acid-reflux damage that causes the esophageal tissue lining to thicken and become irritated, increasing cancer risk. The same group of researchers conducted both studies: the first paper describes model development for detection, while the second encompasses its tuning and update to include location.
The authors proposed a multi-stage pretraining approach that involved training a CNN on 494 355 gastrointestinal images before fine-tuning it on a smaller dataset of medical images specific to Barrett's neoplasia. The model was trained with images from different endoscopes. In the first paper, using data from separate institutions, the authors used a retrospective dataset of early Barrett's neoplasia for primary validation (80 patients, unknown sex proportion) and a second, prospectively acquired dataset (80 patients and images) to compare their model's performance against fifty-three endoscopists (17 seniors, 8 juniors, 18 fellows, and 10 novices). In the second paper, the researchers validated their model on three prospective datasets: one with clinically representative images (80 individuals), one with subtle lesions (80 subjects), and one in a live setting with dysplastic and nondysplastic patients (ten each). It showed excellent performance on the three external validation datasets, and its detection and localization performances were also compared against the 53 experienced endoscopists on the subtle lesions. The CAD system outperformed all 53 endoscopists for all tested metrics in both papers, obtaining an average accuracy, sensitivity, and specificity of 87.9%, 91.7%, and 84.16%, respectively. The models developed in both articles performed similarly and were tested in clinically realistic scenarios, with an average accuracy, sensitivity, and specificity of 88.45%, 91.25%, and 85.63%, respectively, underscoring CNNs' predictive power. Additionally, a retrospective study evaluated cancer-specific survival for esophageal adenocarcinoma and squamous cell carcinoma according to individual treatment recommendations. The authors trained a deep, regression- and text-based survival neural network (DeepSurv, a multi-layer perceptron) using the SEER database (6 855 patients) and validated it on 150 women and 350 men from their institution (China).
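Survival networks such as DeepSurv are usually scored with the concordance index: the fraction of comparable patient pairs in which the higher predicted risk belongs to the patient with the shorter survival. A minimal sketch of Harrell's C-index (illustrative, not the papers' implementation):

```python
# Illustrative Harrell's C-index for right-censored survival data.
# Higher risk scores should pair with shorter observed survival times.

def c_index(times, events, risks):
    """times: survival times; events: 1 = event observed, 0 = censored;
    risks: model risk scores. Returns concordant / comparable pairs."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if subject i's event precedes time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1       # correctly ordered pair
                elif risks[i] == risks[j]:
                    concordant += 0.5     # ties count as half
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ranking, which puts values such as the 0.657 reported for one prognostic model in this review into context.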
Additionally, prognostic performance was compared against TNM-8, having exceeded it. However, only one medical center was used, and the research was not performed in an accurately representative clinical setting.

Gastric Cancer

In five articles, models were developed for gastric cancer-related tasks. The first three studies had a diagnostic component. In the first, the authors developed two models – GastroMIL and MIL-GC – training them on WSIs from H&E slides magnified 30 times, collected from TCGA and a Chinese institution. They also temporally and geographically validated them with 175 WSIs from 91 patients from NHGRP. GastroMIL used an ensemble of a CNN and an RNN to distinguish gastric cancer from normal gastric tissue images. Its performance was compared against one junior and three expert pathologists. MIL-GC, a regression-based model, was created to predict patients' overall survival. Besides WSIs, MIL-GC uses clinical data, namely survival state, overall survival time, age, sex, tumor size, neoplasm histologic grade, and pathologic T, N, M, and TNM-8 stages. The deep learning models achieved high performance in both tasks, with an overall accuracy of 92% for diagnosis and a C-index of 0.657 for prognosis prediction in the external dataset. Compared to human performance, GastroMIL outperformed the junior pathologist in accuracy and sensitivity but was surpassed by the experienced pathologists (in accuracy, sensitivity, and specificity). However, the tested cohorts were retrospective and had unbalanced survival times, and clinical utility was not evaluated for the prognostic model. The second study used a CNN (ResNet-50) for real-time gastric cancer diagnosis. The model was developed with 3 407 endoscopic images of 666 patients with gastric lesions from two institutions.
The DCNN model was tested on a temporally different dataset of endoscopic videos from a separate institution (54 videos from 54 patients), and performance was compared against 20 endoscopists (6 experts, 14 novices). The model achieved better performance than any of the endoscopists, and diagnostic accuracy, sensitivity, and specificity increased for all clinicians while assisted by the model. Nevertheless, despite decreasing the aggregate diagnostic time from 4.35 s to 3.01 s, it increased the experts' diagnostic time by 0.10 s. In addition, the diagnostic model was only tested on high-quality images, and the validation dataset was small and domestic. Although slightly less sensitive than GastroMIL (93.2% vs. 93.4%), this model achieved the best accuracy, evidencing that endoscopic images and videos might be more appropriate for diagnosing gastric cancer. The third model was created using endoscopic ultrasonography (EUS) images for the differential diagnosis of gastric mesenchymal tumors, including GISTs, leiomyomas, and schwannomas. This model was trained with EUS images from three Korean institutions and tested on a temporally separate set of 212 images from the same centers (69 patients, 38 female, 31 male). A sequential analysis approach was adopted using two CNNs: the first classifies the tumor as GIST or non-GIST; for non-GISTs, the second CNN classifies it as either a leiomyoma or a schwannoma. The results were compared against junior (n = 3, fewer than 200 examinations) and expert endoscopists (n = 3, more than 500 examinations) who evaluated the same images, with the model surpassing them in both types of classification. However, this study was retrospective and involved a small number of patients, and the types of equipment used to perform the ultrasounds varied considerably across the facilities. The last two papers concerned outcome predictions.
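The sequential two-CNN design described above (GIST vs. non-GIST, then leiomyoma vs. schwannoma) amounts to a small decision cascade. A sketch with hypothetical stand-in classifiers (any callables returning a probability):

```python
# Minimal sketch of the two-stage triage described above. The classifier
# arguments are hypothetical stand-ins for the two trained CNNs.

def cascade_diagnose(image, gist_model, subtype_model):
    """gist_model(image) -> P(GIST); subtype_model(image) -> P(leiomyoma).
    Only images the first stage rules out as GIST reach the second stage."""
    if gist_model(image) >= 0.5:
        return "GIST"
    return "leiomyoma" if subtype_model(image) >= 0.5 else "schwannoma"
```

Splitting the problem this way lets each network specialize in one binary decision, which is often easier to train than a single three-way classifier on a small dataset.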
The first paper presents a multi-institutional study that uses multitask deep learning to predict peritoneal recurrence and disease-free survival in gastric cancer patients after curative-intent surgery based on CT images. Supervised contrastive learning and a dynamic convolutional neural network were combined to achieve this purpose, and Grad-CAM was used to explain the model’s decisions. The model included CT scans from three patient cohorts, and external validation included 1 043 patients (329 women, 714 men) and as many images from another Chinese institution. In addition, the authors investigated clinician performance for peritoneal recurrence prediction with and without the assistance of the AI model, finding that performance was significantly enhanced after integrating it and that the model alone surpassed all physicians. Nonetheless, only East Asian patients were included in this retrospective study, which was not performed in a real clinical setting, and sensitivity was only reported for one of the clinicians. The last study discusses the use of CT radiomics to predict the response of advanced gastric cancer to neoadjuvant chemotherapy and to detect pathological downstaging at an early stage. The authors trained two SVCs on 206 patients who had undergone three or four cycles of chemotherapy and externally validated them on two testing cohorts, which were also used for benchmarking detection against RECIST. The first testing cohort consists of temporal validation (40 patients and CTs, 13 women, 27 men), while the second differs in the number of chemotherapy cycles (46 individuals and CTs, 10 women, 36 men). Performance of the detection model surpassed RECIST in both cohorts, and, except for sensitivity, the response prediction model also produced positive results. However, retrospective data and a small, unbalanced sample size constrain this study, which was not evaluated in a clinically representative setting.
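Several survival models in this section (e.g., MIL-GC, with a C-index of 0.657) are ranked by Harrell's concordance index: the fraction of comparable patient pairs in which the patient with the shorter observed survival time also received the higher predicted risk. A minimal sketch on illustrative toy data (all values hypothetical):

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs, how often does the
    earlier-event patient carry the higher predicted risk?
    0.5 is random ranking, 1.0 is perfect ranking."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:  # order so i has the shorter follow-up
            i, j = j, i
        if not events[i]:
            continue  # shorter follow-up is censored: pair not comparable
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5  # ties count half
    return concordant / comparable

# toy cohort: higher risk score should mean shorter survival
times = [5, 10, 3, 8]        # months of follow-up
events = [1, 1, 1, 0]        # 1 = death observed, 0 = censored
scores = [0.9, 0.2, 0.8, 0.4]
print(concordance_index(times, events, scores))  # 0.8
```

Censored patients only contribute when their follow-up outlasts another patient's observed event, which is why the C-index, not plain accuracy, is the standard metric for these prognostic models.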
Liver Cancer Two models were developed for liver cancer-related predictions. The first aimed at classifying hepatocellular carcinomas and cholangiocarcinomas (differential diagnosis). The authors developed a web-based (cloud-deployed AI model and browser-based interface) CNN (DenseNet architecture) using WSIs from H&E slides magnified 40 times and used Grad-CAM to increase the model’s explainability. The training dataset was obtained from TCGA (70 slides from 70 unique patients). The external validation dataset was collected from the Department of Pathology at Stanford University Medical Center (80 slides from 24 women and 56 men). The model achieved a diagnostic accuracy of 84.2% in the validation cohort. Diagnostic performance was also compared to that of 11 pathologists. Except for the two unspecified pathologists, performance (AUC) increased for all clinicians when assisted by this tool. However, the pathologists only had access to the WSIs (as opposed to being complemented with clinical data), the model required manual intervention for patch selection, and the study was retrospective with a small sample size (development and external validation with a total of 150 WSIs and patients). The second model was designed to predict three-year overall survival for intrahepatic cholangiocarcinoma patients after undergoing hepatectomy using an ensemble of Random Forests, XGBoost, and GBDT. From a single quaternary Chinese institution, the authors collected 1 390 patients for training and 42 patients (12 women, 30 men) for external temporal validation. Results were compared against the TNM-8 and LCSGJ staging systems, with model performance exceeding that of the routinely used tools. Nonetheless, this was a monoinstitutional endeavor limited to a small number of Asian patients. Furthermore, only six prognostic factors were used: carcinoembryonic antigen, carbohydrate antigen 19–9, alpha-fetoprotein, pre-albumin, and T and N stages.
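Ensembles such as the Random Forest + XGBoost + GBDT combination above typically merge their members by averaging predicted probabilities, often weighting each model by its validation performance (the papers do not specify their exact aggregation rule; the weights below are illustrative):

```python
def weighted_soft_vote(probabilities, weights):
    """Combine per-model event probabilities into one ensemble score.
    Each member's vote is scaled by its normalized weight (e.g., its
    validation accuracy)."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probabilities, weights)) / total

# three hypothetical members standing in for RF, XGBoost, and GBDT,
# weighted by hypothetical validation accuracies
ensemble_prob = weighted_soft_vote([0.60, 0.75, 0.70], [0.80, 0.85, 0.82])
print(round(ensemble_prob, 3))
```

Soft voting preserves each member's confidence, unlike majority (hard) voting, which discards it; that is why probability averaging is the common default for heterogeneous tree ensembles like this one.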
Endocrine system Three papers described prognostic models for cancers in organs affecting the endocrine system (pancreas and thymus), whose results are depicted in Table . Pancreatic Cancer The first two studies assessed survival for pancreatic ductal adenocarcinoma (PDAC) patients but adopted disparate research designs and clinical inputs. The first group of researchers used a regression-based random survival forest model to prognosticate patients with advanced pancreatic cancer. Aimed at predicting overall survival for patients with unresectable PDAC, the model was developed with clinical data and CT scans from a German institution (203 patients). It was temporally and geographically validated using only text-based clinical data from patients with liver metastases from the same country (8 women, 14 men) and compared against mGPS, which it outperformed. Additionally, the authors used SHAP to explain their model, finding that the inflammatory markers C-reactive protein and neutrophil-to-lymphocyte ratio had the most significant influence on its decision-making. Nonetheless, only 22 national patients were used to validate the model externally, and different types of inputs were used for training and testing. The second set of authors used an ensemble of ML methods – ANN, logistic regression, RF, GB, SVM, and CNNs (3D ResNet-18, R(2 + 1)D-18, 3D ResNeXt-50, and 3D DenseNet-121) – to predict 2-year overall and 1-year recurrence-free survival for PDAC patients after surgical resection. The classifier was trained and tuned using 229 patients and temporally validated with CECT images and seventeen clinical variables from the same South Korean institution (53 CECTs from 27 women and 26 men). Grad-CAM was used to explain the model’s decisions, and comparisons were made against TNM-8 to evaluate clinical utility.
Although more accurate, more specific, and with a higher PPV than TNM-8, it was less sensitive for both predictions and had a lower NPV for overall survival prediction. Furthermore, tumor margins were manually segmented, and the model did not consider histopathologic data. Thymic Cancer One study was designed for the simplified risk categorization of thymic epithelial tumors (TETs), rare cancer forms. Here, three types of tumors were evaluated: low-risk thymoma (LRT), high-risk thymoma (HRT), and thymic carcinoma (TC). Three triple classification models were developed using radiomic features extracted from preoperative NECT images and clinical data from 433 patients: (i) LRT vs. HRT + TC; (ii) HRT vs. LRT + TC; and (iii) TC vs. LRT + HRT. The authors compared several CT-based classifiers: logistic regression, linear SVC, Bernoulli and Gaussian Naïve Bayes, LDA, Stochastic Gradient Descent, SVM, DT, kNN, MLP, RF, AdaBoost, gradient boosting, and XGBoost. Combined with clinical data, the SVM model demonstrated the best performance for predicting the simplified TETs risk categorization. In addition, the SVM model was validated in a temporally different cohort using images from 5 types of scanners (76 scans and patients, 33 women, 48 men). Finally, its diagnostic performance was compared against three radiologists (3, 6, and 12 years of experience), exceeding them regarding AUC (0.844 versus 0.645, 0.813, and 0.724) but not for other metrics (accuracy, sensitivity, and specificity). Caveats include the small number of patients, the low number of thymic carcinomas, and the incomplete automation of the models. Genitourinary system Table illustrates the models developed for genitourinary cancers, including the bladder, cervix, prostate, and uterus. Bladder Cancer From the retrieved models, only one assesses outcomes for primary bladder cancers. This article presents a CNN-based strategy to predict the muscular invasiveness of bladder cancer based on CT images and clinical data.
The model was developed with 183 patients. Its performance was tested on a temporally and geographically different validation cohort of patients with urothelial carcinoma from an independent institution (13 women, 62 men, and as many images). The model’s predictions were juxtaposed with diagnoses from two radiologists with nine and two years of experience, achieving better accuracy and specificity than the two clinicians but a lower sensitivity. Overall, the authors found that the deep learning algorithm achieved a high accuracy rate in predicting muscular invasiveness, an essential factor in determining the prognosis and treatment of bladder cancer. However, the study is limited by its retrospective nature, exclusion of tumors not visible in CT images, and small sample size. Cervical Cancer Similarly, primary tumors of the cervix were only screened in one paper. Here, the authors trained an ensemble of convolutional and recurrent neural networks on whole-slide images from patients’ cervical biopsies and 79 911 annotations from five hospitals and five kinds of scanners. The system comprises (i) two CNNs – the first scans WSIs at low resolution and the second at high resolution – to identify and locate the ten most suspicious areas in each slide, and (ii) an RNN to predict the corresponding probabilities. The system classifies squamous and glandular epithelial cell abnormalities as positive (neoplastic) and normal findings as negative for intraepithelial lesions or malignancies (non-neoplastic). The method was externally validated on multi-center independent test sets of 1 565 women (1 170 without additional conditions and 395 with HPV), and classification performance was compared against three cytopathologists. Although the model obtained promising results and surpassed clinician performance for both groups of women, the authors highlight that it was designed for the general female population, implying that further refinements are required for specific comorbidities.
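The cervical screening pipeline above follows a common weakly supervised pattern: a patch-level model scores every region of the slide, only the most suspicious regions are kept, and an aggregator turns them into a slide-level probability. The sketch below uses a simple mean as a stand-in for the paper's RNN aggregator, and the scores are synthetic:

```python
def top_k_slide_score(patch_scores, k=10):
    """Rank patch-level suspicion scores, keep the k most suspicious
    regions (ten in the cervical system described above), and aggregate
    them into one slide-level probability. A mean stands in here for the
    learned RNN aggregator."""
    top = sorted(patch_scores, reverse=True)[:k]
    return sum(top) / len(top)

# a slide with mostly benign patches and three suspicious ones
scores = [0.05] * 40 + [0.9, 0.85, 0.8]
print(round(top_k_slide_score(scores, k=10), 3))  # 0.29
```

Restricting aggregation to the top-k patches is what lets these systems learn from slide-level labels alone: a single strongly suspicious region can dominate the slide score even when most of the tissue is benign.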
Prostate Cancer Two models were developed for prostate-cancer-related classifications using multiparametric MRI scans. In the first paper, the authors describe the development of AutoProstate, a system employing deep learning to generate a report summarizing the probability of suspicious lesions qualifying as clinically significant prostate cancer (CSPCa). The authors trained their approach on the PROSTATEx dataset (249 men), externally validated it on the PICTURE dataset (247 patients), and compared its reports (with post-thresholding and false-positive reduction) to those generated by a radiologist with ten years of experience. The system achieved a high level of agreement with the human reports (surpassing the radiologist in AUC and specificity) and could accurately identify CSPCa. However, this study was retrospective, a single (public) dataset was used for external validation, and only two types of prostate lesions were considered. The second article presented an ML-based approach for prostate cancer risk stratification using radiomics applied to multiparametric MRI scans. In this retrospective, monoinstitutional study, the authors compared seven classification algorithms: logistic regression; linear, quadratic (Q), cubic, and Gaussian kernel-based SVMs; linear discriminant analysis; and RF. After training with 68 patients, the best-performing method – QSVM – was validated on a temporally independent dataset (14 high- and 39 low-risk patients). Its performance was compared against PI-RADS v2, and the model could accurately predict the risk of clinically significant prostate cancer. Although the classifier performed equivalently to PI-RADS v2 regarding AUC, it performed substantially better in class-specific measures (F1-score, sensitivity, and PPV), especially for the high-risk class. However, the study is limited by its retrospective nature and small sample size from a single source.
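The class-specific measures highlighted in the prostate study (sensitivity, PPV, and F1-score) are computed per class from the confusion-matrix counts; with imbalanced cohorts like the 14 high- vs. 39 low-risk split above, they reveal differences that a global AUC can hide. A small sketch with made-up labels:

```python
def class_metrics(y_true, y_pred, positive):
    """Per-class sensitivity (recall), PPV (precision), and their
    harmonic mean, the F1-score, for the chosen positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * sensitivity * ppv / (sensitivity + ppv) if sensitivity + ppv else 0.0
    return sensitivity, ppv, f1

# hypothetical risk calls for five patients
y_true = ["high", "low", "high", "low", "high"]
y_pred = ["high", "low", "low", "low", "high"]
print(class_metrics(y_true, y_pred, positive="high"))
```

Here one high-risk patient is missed, so sensitivity drops to 2/3 while PPV stays at 1.0; reporting both per class is what exposed QSVM's advantage over PI-RADS v2 on the high-risk class.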
Uterine Cancer Two studies for primary cancers focused on classifying lesions of the endometrium, the layer of tissue lining the uterus. In the first article, using 245 women as the training cohort, the authors compared nine models – logistic regression (LR), SVM, stochastic gradient descent, kNN, DT, RF, ExtraTrees, XGBoost, and LightGBM – to obtain an optimal algorithm for differential diagnosis (malignant versus benign tumors). A radiomics score (radscore) was computed for the best-performing algorithm (logistic regression), and four models were selected using different combinations of T1-weighted, T2-weighted, and DWI MRI features: (i) the radiomics model; (ii) a nomogram, combining the radscore and clinical predictive parameters; (iii) a two-tiered stacking model, where the first tier comprised the clinical model and the optimal radiomics model (LR), and the second tier used the output of the first tier as the input of a multivariate LR; and (iv) an ensemble model, where the predictions of the preceding clinical and radiomics models were combined through an accuracy-weighted average. The results showed that all four models accurately differentiated stage IA endometrial cancer from benign endometrial lesions. Furthermore, during external validation (44 MRIs from 44 women), the authors found that the nomogram had a higher AUC than the radiomics model, revealing more stable discrimination efficiency and better generalizability than the stacking and ensemble models and a radiologist with 30 years of experience (except in sensitivity). Nevertheless, data was collected from two same-country centers (Chinese institutions), only standard radiomics features were extracted, and lesions were manually segmented, which is highly time-consuming. The second paper encompassed a global-to-local multi-scale CNN to diagnose endometrial hyperplasia and screen endometrial intraepithelial neoplasia (EIN) in histopathological images.
The researchers trained the CNN using a large annotated dataset (6 248 images) and tested it on a temporally different set of patients (1 631 images, 135 specimens, 102 women). They found that it performed well in diagnosing endometrial hyperplasia and detecting EIN, outperforming a junior pathologist (2 years of experience) and obtaining comparable performance to a mid-level and a senior pathologist (6 and 25 years of experience, respectively). The authors used Grad-CAM to emphasize the regions the model deemed relevant for diagnosis. However, this retrospective study only used histopathological images (as opposed to WSIs). Besides, it focused solely on classifying healthy slides, hyperplasia without atypia, and endometrial intraepithelial neoplasia, thus neglecting the differentiation between benign lesions and endometrial cancer. Integumentary system As illustrated in Table , five papers studied cancers of the integumentary system, focusing on the breasts and skin. Breast Cancer Three studies developed models for cancers originating in the breasts, each with a specific purpose and using different clinical modalities. In the first study, several text-based machine learning classifiers – DTs, RFs, MLPs, logistic regression, naïve Bayes, and XGBoost – were compared to establish optimal classifiers for osteoporosis, relative fracture, and 8-year overall survival predictions. The algorithm was trained on 420 patients from a Chinese institution and geographically validated on 150 women from a separate local institution. The osteoporosis model was compared against OSTA and FRAX, the fracture model against FRAX, and the prognostic model against TNM-8. The results showed that the XGBoost classifier performed the best for the three tasks and outperformed the established clinical tools.
Additionally, for explainability, the authors used SHAP for feature importance analysis for each model: (i) age, use of anti-estrogens, and molecular type are the most predictive of osteoporosis; (ii) osteoporosis, age, and bone-specific alkaline phosphatase are the best predictors for fracture; and (iii) N-stage, molecular type, and age have the highest prognostic value for overall survival. Despite its positive results, prospective studies are needed to validate the model in more diverse patient populations. In the second study, the authors explored how combining AI and radiologists can improve breast cancer screening. Using 213 694 retrospectively collected mammograms (X-ray images) from 92 585 women, it was found that the combination of radiologists and AI (a CNN-based classifier) achieved the highest accuracy in detecting breast cancer. The sensitivity and specificity of the standalone AI system were significantly lower than those of an unaided radiologist. However, the decision-referral approach outperformed the unaided radiologist on both sensitivity and specificity for several tested thresholds. Nonetheless, the study only included mammogram images and did not consider other factors, such as patient history or clinical data, which may impact the accuracy of breast cancer screening. Furthermore, the AI algorithm used in the study was not optimized for clinical use and may require further development and testing before it can be implemented in a clinical setting. Lastly, the third work entailed diagnosing non-cystic benign and malignant breast lesions from ultrasonographic images. Radiomic features were extracted from the ultrasound images, and a random forest model was trained with 135 lesions and externally validated to predict malignancy for each lesion. Moreover, the performance of an experienced radiologist (8 years of experience) was compared with and without the model’s assistance.
Although not with statistical significance, the radiologist's assessments improved when using the AI system. However, the final validation population was small (66 ultrasounds from 57 women) and showed different proportions of malignant lesions. Skin Cancer Two models were developed to diagnose skin tumors using photographs, producing an average AUC, sensitivity, and specificity of 0.89, 77.1%, and 81.74%. The first was a retrospective validation study assessing the performance of deep neural networks in detecting and diagnosing benign and malignant skin neoplasms of the head and neck, trunk, arms, and legs. In a previous study, the authors trained an ensemble of CNNs (SENet + SE-ResNeXt-50 + faster RCNN) with 1 106 886 image crops from South Korean patients to detect potential lesions and classify skin malignancies. Here, performance was tested on new temporal and geographical validation datasets of skin lesions (national and international, 46 696 photographs from 10 876 patients): (i) one dataset was used to compare the model’s classification performance against 65 attending physicians in real-world practice; (ii) another was used to evaluate classification performance against 44 dermatologists in an experimental setting; and (iii) the last two were meant to predict the exact diagnosis (1 of 43 primary skin neoplasms) in a local (South Korean) and an international (UK, 1 300 images) dataset, with the first also being compared against physicians. In (i) and (ii), performance was calculated for high-specificity and high-sensitivity thresholds. The algorithm was more sensitive and specific than the dermatologists in the experimental setting. However, attending physicians outperformed it in real-world practice in all tested metrics (sensitivity, specificity, PPV, and NPV). In addition, the model only dealt with high-quality clinical photographs, and there was a lack of ethnic diversity in the study population.
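Both the breast-screening and skin studies above evaluate models at chosen operating thresholds, and the breast study's decision-referral approach lets the AI act autonomously only on confident cases while deferring the rest to the radiologist. A minimal sketch of that triage rule (the cutoff values are illustrative, not those of the study):

```python
def decision_referral(ai_prob, radiologist_call, low=0.05, high=0.95):
    """Decision-referral screening: the AI decides on its own only when
    its malignancy probability is confidently low or high; borderline
    cases are referred to the human reader. Thresholds are hypothetical
    and would be tuned on a validation set."""
    if ai_prob <= low:
        return "normal"       # confident rule-out
    if ai_prob >= high:
        return "suspicious"   # confident rule-in
    return radiologist_call   # defer to the radiologist

print(decision_referral(0.01, "suspicious"))  # AI confident: "normal"
print(decision_referral(0.50, "suspicious"))  # uncertain: human call wins
```

Because the AI only overrides the workflow where its validation error rate is lowest, the combined reader can beat both the unaided radiologist and the standalone model, which is the pattern the breast-screening study reports.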
The second paper presented a set of CNNs – DenseNet-121 (Faster R-CNN and deep classification network) – developed to detect malignant eyelid tumors from photographic images. The researchers used a dataset of 1 417 clinical images with 1 533 eyelid tumors from 851 patients across three Chinese institutions (one for development and two for external validation). Besides using Grad-CAM for interpretation, the authors compared the AI’s performance on the external dataset (266 pictures from 176 patients) against three ophthalmologists: one junior, one senior, and one expert (3, 7, and 15 years of experience, respectively). It surpassed the junior and senior ophthalmologists’ performance and achieved similar results to the expert. Notwithstanding its potential, the system still needs evaluation on non-Asian populations and prospectively acquired datasets, and it was only developed for detection (it cannot provide a specific diagnosis). Respiratory system and associated tissues Thirteen papers addressed respiratory system cancers, which predominantly concerned the lungs but also included the larynx, nasopharynx, and mesothelium (Table ). Lung Cancer Ten approaches were developed for lung cancer assessments. The first document describes a validation study of a CNN-based tool (DenseNet) designed to predict the malignancy of pulmonary nodules. The model was previously trained with the NLST dataset and was now externally validated in 3 UK centers with different CT scanners (1 397 CECTs and NECTs, 1 187 patients of unknown gender ratio). The authors also evaluated its clinical utility by comparing it to the Brock model. Although slightly less specific than the Brock model, the detection algorithm developed by the authors had a higher AUC and sensitivity. Despite having undergone international validation, prospective studies in ethnically diverse populations are still amiss.
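Grad-CAM, the interpretation technique used by the eyelid model above and by several other models in this review, weights each convolutional channel by its spatially averaged gradient and keeps only the positive evidence. A numpy sketch with synthetic arrays standing in for a real network's activation and gradient tensors:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer.
    activations, gradients: (channels, height, width) arrays taken at the
    last convolutional layer for the target class. Returns a
    (height, width) relevance map."""
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum
    return np.maximum(cam, 0.0)                       # ReLU keeps positive evidence

# synthetic stand-ins for a network's tensors (8 channels, 4x4 spatial map)
rng = np.random.default_rng(0)
acts, grads = rng.random((8, 4, 4)), rng.random((8, 4, 4))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (4, 4)
```

The heatmap is then upsampled and overlaid on the input image, which is how these studies show clinicians *where* the model found its evidence without changing the model itself.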
The second paper involved developing and validating a model to predict the malignancy of multiple pulmonary nodules from CT scans and eleven clinical variables. The study analyzed data from various medical centers. Eleven ML methods were compared to identify the best malignancy predictor: AdaBoost, DT, Logistic Regression, Linear SVM, Radial Basis Function Kernel SVM, NB, kNN, Neural Net, Quadratic Discriminant Analysis, RF, and XGBoost. The best-performing model – XGBoost – was tested on three datasets. The first was retrospective, compiled from 6 institutions (five from China and one from South Korea), used for primary external validation (220 patients, 583 CT scans), and compared against four well-established models: Brock, Mayo, PKU, and VA. The second retrospective dataset was used to assess generalizability, containing patients from a Chinese institution with solitary pulmonary nodules (195 patients and images, 110 women, 85 men), whose results were also compared against the four just-mentioned models. The third and last dataset included data from 4 Chinese centers and was collected prospectively for secondary validation and comparisons against clinicians (200 CTs, 78 patients, 51 women, 27 men). This comparison involved three thoracic surgeons and one radiologist, who achieved an average sensitivity of 0.651 and specificity of 0.679. The model significantly outperformed this average and each clinician’s AUC, and it prevailed in all comparisons against the routinely used models. In addition, SHAP was used to identify the most predictive nodule characteristics: nodule size, type, count, border, patient age, spiculation, lobulation, emphysema, nodule location, and distribution. Nonetheless, besides not reporting individual clinician sensitivity and specificity in the prospective cohort, the drawbacks of this study include only assessing typical high-risk patients and the lack of validation with different ethnicities.
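SHAP, used above to rank the nodule features, requires the `shap` package and access to the trained model; permutation importance is a simpler model-agnostic stand-in that answers the same question (which inputs drive the predictions?) by shuffling one feature at a time and measuring the accuracy drop. A sketch with a hypothetical toy model where only nodule size matters:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=20, seed=0):
    """Shuffle each feature column in turn and record the mean drop in
    accuracy relative to the unshuffled baseline. Larger drop means the
    model relies more on that feature."""
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        accs = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's association with y
            accs.append((predict(Xp) == y).mean())
        drops.append(baseline - float(np.mean(accs)))
    return drops

# toy "model": nodules larger than 8 mm are called malignant;
# feature 0 (size) determines the label, feature 1 is pure noise
rng = np.random.default_rng(1)
X = rng.uniform(0, 16, size=(200, 2))
y = (X[:, 0] > 8).astype(int)
model = lambda data: (data[:, 0] > 8).astype(int)
drops = permutation_importance(model, X, y)
print(drops[0] > drops[1])  # True: size dominates, noise contributes nothing
```

Unlike SHAP, this gives only a global ranking rather than per-patient attributions, but it needs nothing beyond the model's predict function.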
The work in the third article involved a CNN-based model for predicting the presence of visceral pleural invasion in patients with early-stage lung cancer. The deep learning model was trained using a dataset of CT scans from 676 patients and externally validated on a temporally different cohort from the same South Korean institution (141 CTs from 84 women and 57 men). Besides using Grad-CAM to evidence its decisions, this CNN can adapt its sensitivity and specificity to meet the clinical needs of individual patients and clinicians. The model achieved a performance level comparable to three expert radiologists but did not surpass them except in PPV. Besides, these are results from a monoinstitutional retrospective study where geographical validation was not performed. In addition to using a small number of patients, the data was also imbalanced, and the model was not fully automated (it required manual tumor annotations). The fourth article concerns the development of an EfficientNetV2-based CNN system to predict the survival benefit of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs) in patients with stage IV non-small cell lung cancer. The model was developed with accessible pre-therapy CT images from five centers and externally validated on a monoinstitutional dataset from a Chinese institution (92 CTs from 92 patients). The authors also compared radiologists’ and oncologists’ (three each, 2, 5, and 10 years of experience) performance with and without ESBP. The results showed that, while assisted by the system, all radiologists improved their diagnostic accuracy, sensitivity, specificity, PPV, and NPV (except for the trainee oncologist, who achieved better sensitivity without the model). However, prospective studies in ethnically rich cohorts are still necessary to implement this tool in clinical practice.
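The adjustable sensitivity/specificity of the pleural-invasion CNN comes from choosing the probability cutoff on a validation set: picking the highest cutoff that still meets a target sensitivity concedes as little specificity as possible. A sketch with toy scores (values illustrative):

```python
def threshold_for_sensitivity(y_true, y_prob, target_sensitivity):
    """Return the highest probability cutoff whose validation-set
    sensitivity still meets the target. Scanning cutoffs from high to
    low means we stop at the first (largest) one that qualifies, which
    preserves the most specificity."""
    positives = [p for t, p in zip(y_true, y_prob) if t == 1]
    n_pos = len(positives)
    for cutoff in sorted(set(y_prob), reverse=True):
        sensitivity = sum(p >= cutoff for p in positives) / n_pos
        if sensitivity >= target_sensitivity:
            return cutoff
    return min(y_prob)  # even the lowest cutoff is needed

y_true = [1, 1, 1, 0, 0]
y_prob = [0.9, 0.7, 0.2, 0.6, 0.1]
print(threshold_for_sensitivity(y_true, y_prob, 0.66))  # 0.7
```

A clinician wanting a rule-out test would request high sensitivity (a low cutoff); one wanting to minimize false alarms would do the symmetric search for a specificity target instead.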
The fifth study aimed at finding optimal predictors of two-year recurrence, recurrence-free survival, and overall survival after curative-intent radiotherapy for non-small cell lung cancer. Ten text-based ML models were trained on 498 patients and compared: ANN, Linear and Non-linear SVM, Generalized Linear Model, kNN, RF, MDA, Partial Least Squares, NB, and XGBoost. The best-performing models were as follows: (i) an ensemble of kNN, NB, and RF for recurrence classification; (ii) kNN for recurrence-free survival prediction; and (iii) a combination of XGBoost, ANN, and MDA for overall survival. The three optimal predictors were externally validated using routinely collected data from 5 UK institutions (159 patients, 71 women, 88 men) and compared against TNM-8 and WHO performance status. The recurrence and overall survival models outperformed both routinely used systems, but these tools surpassed the recurrence-free survival predictor’s performance. Moreover, this study was retrospective and had a small sample size with missing data. The sixth study was designed to identify high-risk smokers to predict long-term lung cancer incidence (12 years). In this paper, the authors developed a convolutional Inception-v4 network based on low-dose chest CT images, age, sex, and current versus former smoking status. The CNN was trained using patients from the PLCO trial and externally validated on data from the NLST randomized controlled trial (2 456 women and 3 037 men from 33 USA institutions). The model was also compared against PLCOm2012 to evaluate clinical utility, exceeding its performance for all assessed metrics (AUC, sensitivity, specificity, PPV, and NPV). However, this study was retrospective, lacked ethnic diversity, and was not evaluated in a clinically realistic scenario. Additionally, information from symptomatic patients was unavailable due to using data from a screening trial.
In the seventh article, a CNN-based model was developed for the automated detection and diagnosis of malignant pulmonary nodules on CECT scans . The algorithm was externally validated on four separate datasets with ethnic differences (three from South Korea and one from the USA, amounting to 693 patients and CTs). Furthermore, the diagnostic performance of 18 physicians (from non-radiologists to radiologists with 26 years of experience) was compared while assisted and not assisted by the algorithm for one dataset. The model achieved an excellent performance in the four tested datasets, outperforming all clinicians, and the professionals’ accuracy increased while aided by the model for all tested groups. Nonetheless, the model was undertrained for small nodules (< 1 cm) and trained only for malignant nodule detection for one type of CT (posterior-anterior projections), and the study was retrospective and not representative of a real-world clinical setting. The eighth algorithm consisted of a multilayer perceptron (Feed-Forward Neural Network) paired with a Cox proportional hazards model to predict cancer-specific survival for non-small cell lung cancer . The text-based model was trained using the SEER database and externally validated on patients from a Chinese tertiary pulmonary hospital (642 women, 540 men). It was compared against TNM-8, having outperformed it with statistical significance. Although tested with real-world clinical data, prospective multi-institutional studies are needed before the deep learning model can be used in clinical practice. The ninth article described developing, validating, and comparing three CNN models to differentiate between benign and malignant pulmonary ground-glass nodules (GGNs) . The first CNN only used CT images. The second CNN used clinical data: age, sex, and smoking history. The third was a fusion model combining CTs and clinical features, achieving the best performance. 
This model was temporally and geographically validated with 63 CT scans from 61 patients (39 women, 22 men). Its classification performance was compared against two radiologists (5 and 10 years of experience) for clinical utility assessment. Despite performing satisfactorily in external validation, the model was surpassed by both clinicians in accuracy, sensitivity, and PPV, only producing higher results for specificity and NPV. Furthermore, this study was retrospective, and validation was neither international nor performed in a realistic clinical setting. In the tenth and final paper, a Neural Multitask Logistic Regression (N-MTLR) network was developed for survival risk stratification for stage III non-small cell lung cancer. The text-based deep learning system was trained on 16 613 patients from the SEER database and externally validated on subjects from a Chinese institution (172 patients, 39 women, 133 men). The results in the external dataset showed that the deep learning model could predict survival outcomes more accurately than TNM-8 (AUC of 0.7439 vs. 0.561). The study results suggest that the deep learning system could be used for personalized treatment planning and stratification for patients with stage III non-small cell lung cancer. However, prospective studies on multi-institutional datasets are still required. Laryngeal, Mesothelial and Nasopharyngeal Cancers Three models were developed to assess tumors of other elements of the respiratory system. In the first study, the authors trained a CNN (GoogLeNet Inception v3 network) with 13 721 raw endoscopic laryngeal images – including laryngeal cancer (LCA), precancerous laryngeal lesions (PRELCA), benign laryngeal tumors (BLT), and healthy tissue – from three Chinese institutions (1 816 patients).
External validation was performed on 1 176 white-light endoscopic images from two additional institutions in the same country (392 patients), testing the model both for binary classification – urgent (LCA and PRELCA) or non-urgent (BLT and healthy) – and for discrimination between the four conditions. Predictions for both classification types were compared against three endoscopists (3, 3 to 10, and 10 to 20 years of experience). In two-way classification, the algorithm was less accurate than one endoscopist and less sensitive than two but outperformed all clinicians in four-way diagnostic accuracy. Still, this accuracy was relatively low (less than 80%), the study was retrospective, and all tested laryngoscopic images were obtained with the same type of standard endoscope. Cancers of the mesothelium were approached in a single retrospective multi-center study. The paper uses DL to distinguish between two types of mesothelial cell proliferations: sarcomatoid malignant mesotheliomas (SMM) and benign spindle cell mesothelial proliferations (BSCMP). SMMs and BSCMPs are difficult to distinguish using traditional histopathological methods, resulting in misdiagnoses. The authors propose a new strategy – SpindleMesoNET – that uses an ensemble of a CNN and an RNN to analyze WSIs of H&E-stained mesothelial slides magnified 40 times. The model was trained on a Canadian dataset, externally validated on 39 images from 39 patients from a Chinese center, and compared against the diagnostic performance of three pathologists on a referral test set (40 WSIs from 40 patients). The accuracy and specificity of SpindleMesoNET on the referral set cases (92.5% and 100%, respectively) exceeded those of the three pathologists on the same slide set (91.7% and 96.5%). However, the pathologists were more sensitive than the diagnostic model (87.3% vs. 85.3%).
In addition, the study had a very small sample size, and only the AUC was reported for the external validation dataset (0.989), which, although considerably high, is insufficient on its own to assess the model’s effectiveness. The last study entailed developing and validating a CNN-based model to differentiate malignant carcinoma from benign nasopharyngeal lesions using white-light endoscopic images . Malignant conditions included lymphoma, rhabdomyosarcoma, olfactory neuroblastoma, malignant melanoma, and plasmacytoma. Benign subtypes encompassed precancerous or atypical hyperplasia, fibroangioma, leiomyoma, meningioma, minor salivary gland tumor, fungal infection, tuberculosis, chronic inflammation, adenoids or lymphoid hyperplasia, nasopharyngeal cyst, and foreign body. The model was trained on 27 536 images collected retrospectively (7 951 subjects) and temporally (prospectively) externally validated with 1 430 images (from 355 patients) from the same Chinese institution. Diagnostic performance was compared against 14 endoscopists: (i) three experts with more than five years of experience; (ii) eight residents with one year of experience; and (iii) three interns with less than three months of experience. Except for the interns’ sensitivity, the model’s diagnostic performance surpassed the endoscopists in all tested metrics. However, data were collected from a single tertiary institution, and more malignancies should be included. Although not developed for the same cancer type, the two cancer detection studies for the larynx and nasopharynx are comparable because both used white-light endoscopic images. Both used CNNs and involved more than 300 patients and 1 000 images, but the best overall diagnostic performance – although less sensitive (72% vs. 90.2% in ) – was achieved by the GoogLeNet Inception v3 CNN, with an AUC of 0.953, an accuracy of 89.7%, and a specificity of 94.8%, underscoring the value of pre-training CNNs.
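The head-to-head comparisons in this section all rest on a handful of confusion-matrix statistics plus the AUC. As a reference, a minimal Python sketch of how these metrics are computed; the counts and scores below are invented for illustration and do not come from any of the reviewed studies.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate (recall)
    specificity = tn / (tn + fp)               # true-negative rate
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv, "f1": f1}

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case is scored higher than a randomly chosen negative."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented example counts: 85 TP, 5 FP, 90 TN, 20 FN.
m = diagnostic_metrics(tp=85, fp=5, tn=90, fn=20)
# Invented model scores for diseased vs. healthy cases.
a = auc([0.9, 0.8, 0.4], [0.7, 0.3])
```

With these invented counts, accuracy works out to (85 + 90)/200 = 0.875 and the toy AUC to 5/6; an AUC of 0.5 corresponds to chance-level ranking.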
Skeletal system

Four studies using different imaging techniques were designed to diagnose bone cancers, producing an average AUC of 0.88 (Table ). The first two radiomics-based models were developed for the binary classification of atypical cartilaginous tumors (ACT) and appendicular chondrosarcomas (CS) . In , a LogitBoost algorithm was temporally and geographically validated on 36 PET-CT scans from 23 women and 13 men. Besides externally validating their method, the authors evaluated clinical utility by comparing its diagnostic performance against a radiologist. The model performed satisfactorily in all calculated metrics (AUC, accuracy, sensitivity, PPV, and F1-score), but its accuracy was lower than the radiologist’s. In addition, only non-contrast PET-CT scans were included in the analyses. In the following year, research performed by the same first author evaluated bone tumor diagnosis from MRI scans . Radiomic features were extracted from T1-weighted MRI scans, and an ExtraTrees algorithm was trained to classify the tumors. On an external validation dataset of 65 images (34 women, 31 men), the model achieved a PPV, sensitivity, and F1-score of 92%, 98%, and 0.95 in classifying ACTs, and of 94%, 80%, and 0.86 in classifying grade II CS of long bones, respectively (the weighted average is presented in Table ). The model's classification performance was compared against an experienced radiologist (35 years of experience) to assess clinical utility, finding that it could not match the radiologist's performance. Using SHAP, it was also found that certain radiomic features, such as the mean and standard deviation of gradient magnitude and entropy, significantly differed between the two tumor types. Drawbacks include the study’s retrospective nature, using only one type of MRI, and over-representing appendicular chondrosarcomas compared to cartilaginous tumors in the study population.
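The radiomic features flagged as discriminative by SHAP in the study above (mean and standard deviation of gradient magnitude, and entropy) are simple first-order statistics. A toy sketch on an invented intensity grid makes the terms concrete; real pipelines use dedicated radiomics software with far more extensive feature sets, so this is illustrative only.

```python
import math

def first_order_radiomics(img):
    """Toy first-order features for a 2-D intensity grid: mean/std of the
    finite-difference gradient magnitude, plus Shannon entropy of the
    intensity histogram. Illustrative only."""
    h, w = len(img), len(img[0])
    grads = []
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal finite difference
            gy = img[y + 1][x] - img[y][x]   # vertical finite difference
            grads.append(math.hypot(gx, gy))
    mean = sum(grads) / len(grads)
    std = math.sqrt(sum((g - mean) ** 2 for g in grads) / len(grads))

    counts = {}
    for row in img:
        for v in row:
            counts[v] = counts.get(v, 0) + 1
    n = h * w
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"grad_mean": mean, "grad_std": std, "entropy": entropy}

# Invented 3x3 "image" with a diagonal intensity ramp.
features = first_order_radiomics([[0, 0, 1],
                                  [0, 1, 2],
                                  [1, 2, 3]])
```

In a radiomics workflow, such statistics would be computed inside a segmented tumor region and then fed to a classifier such as the ExtraTrees model described above.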
The second set of papers used neural networks to differentiate benign from malignant bone tumors on X-ray images . On the one hand, in , a CNN (EfficientNet-B0) was developed on a dataset of 2 899 radiographic images from 1 356 patients with primary bone tumors from 5 institutions (3 for training, 2 for validation), including benign (1 523 images, 679 patients), intermediate (635 images, 317 patients), and malignant (741 images, 360 patients) growths. The CNN model was developed for binary (benign versus not benign and malignant versus not malignant) and three-way (benign versus intermediate versus malignant) tumor classification. The authors also compared the model’s three-way classification performance against two musculoskeletal subspecialists with 25 and 23 years of experience and three junior radiologists with 6, 1, and 7 years of experience. The deep learning algorithm had similar accuracy to the subspecialists and better performance than the junior radiologists. However, only a modest number of patients was used for validation (639 X-rays from 291 patients), tumor classes were imbalanced (a smaller number of benign bone tumors compared to intermediate and malignant), and the pipeline was not fully automated. In contrast, other authors resorted to a non-deep ANN that uses radiomic features extracted from X-ray images and demographic data to classify and differentiate malignant and benign bone tumors . The ANN was developed on 880 patients with the following conditions: (i) malignant tumors: chondrosarcoma, osteosarcoma, Ewing’s sarcoma, plasma cell myeloma, non-Hodgkin lymphoma B cell, and chordoma; (ii) benign subtypes: osteochondroma, enchondroma, chondroblastoma, osteoid osteoma, giant cell tumor, non-ossifying fibroma, haemangioma, aneurysmal bone cyst, simple bone cyst, and fibrous dysplasia. The method was externally validated on 96 patients from a different institution, and performance was compared against four radiologists (two residents and two specialized).
The model was more sensitive than both radiologist groups but was outperformed by the specialized radiologists in accuracy and specificity. In addition, the model requires manual segmentations and can only distinguish between benign and malignant tumors, not specific subtypes.

Metastases (Secondary Tumors)

As shown in Table , five studies entailed the assessment of metastatic cancer, that is, secondary tumors spread from different tissues. From these, three focused on cancer spread to organs , while two evaluated metastasized nodes.

Organ metastases

In , models were created to predict the risk of bone metastasis and prognosis (three-year overall survival) for kidney cancer patients. To achieve optimal performance, the researchers developed and compared eight ML models: DTs, RFs, MLPs, Logistic Regression, a Naïve Bayes classifier, XGBoost, SVMs, and kNN. The text-based models were trained with 71 414 patients from the SEER database (USA) and externally validated with 963 patients from a Chinese institution (323 women, 640 men). The results showed that their XGBoost-based models had the best accuracy in predicting bone metastasis risk and prognosis. The risk prediction model (diagnosis) outperformed TNM-7 only regarding AUC (0.98 vs. 0.93), while the prognostic model exceeded TNM-7’s predictions for all tested metrics (AUC, accuracy, sensitivity, PPV, and F1-score). Using SHAP analysis, the authors also revealed that the key factors influencing these outcomes were age, sex, and tumor characteristics. Although trained on ethnically different patients, these models were only validated on Asian subjects and not compared against clinicians, so further studies are required to establish clinical validity and utility. The second paper explores the effectiveness of a deep learning-based algorithm (CNN) in detecting and classifying liver metastases from colorectal cancer using CT scans .
In this South Korean monoinstitutional study, 502 patients were used for training, and temporally different patients (40 with 99 metastatic lesions, 45 without metastases) were used for validation. The algorithm's detection and classification performance was compared to three radiologists (with 2, 3, and 20 years of experience in liver imaging) and three second-year radiology residents. Although showing a higher diagnostic sensitivity than both types of clinicians, the six radiologists outperformed the model in the area under the alternative free-response ROC curve (AUAFROC, detection) and false positives per patient (FPP, classification). In addition, the CT scans had been captured eight years before the analyses. The third study was conducted in a clinically realistic scenario, and the model has been implemented in practice . The model was designed to predict 3-month mortality in patients with solid metastatic tumors for several types of cancer (breast, gastrointestinal, genitourinary, lung, rare) and treatment alterations in an outpatient setting. The authors trained a Gradient-Boosted Trees Binary Classifier with observations from 28 484 deceased and alive patients and 493 features from demographic characteristics, laboratory test results, flowsheets, and diagnoses. The model was silently deployed in the patients’ EHRs for 20 months to compare its predictions against 74 oncologists. This prospective temporal validation study involved 3 099 encounters from 2 041 ethnically diverse patients. The model outperformed oncologists in all metrics for aggregate (general, with and without treatment alterations), gastrointestinal, genitourinary, and lung cancers but was less sensitive than the professionals for rare and breast metastatic tumors. Although currently available in medical practice, the authors note that further research is needed to validate whether using the model improves prognostic confidence and patient engagement.

Node metastases

Two models were developed to diagnose node metastases.
In , the authors aimed to classify cervical lymph node metastasis from thyroid cancer using CT scans . The researchers had previously developed a CNN (Xception architecture) trained on a dataset of 787 axial preoperative CT scans. This study validated the system's performance on 3 838 images from 698 patients (unknown female-male ratio) and used Grad-CAM to explain the model’s reasoning. The researchers also evaluated the clinical utility of the model by comparing seven radiologists’ performance (one expert, six trainees) with and without its assistance. While aided by the system, the expert’s accuracy, sensitivity, specificity, PPV, and NPV were all found to increase, while only accuracy, specificity, and NPV improved for the trainees. This study was retrospective and conducted in a single institution, and the results obtained were not satisfactory enough to justify clinical implementation. The second and last document describes the development of an ultrasound-based ML model to assess the risk of sentinel lymph node metastasis (SLNM) in breast cancer patients . First, the authors compared ten algorithms to achieve an optimal model: XGBoost, SVM, RF, LDA, Logistic Regression, NB, kNN, MLP, Long Short-Term Memory, and CNN. The best algorithm (XGBoost) was then integrated into a clinical model, and SHAP was used to interpret its predictions. XGBoost was trained with 902 patients, and external validation consisted of 50 temporally separate women. The authors also compared their tool with a radiologist’s diagnostic evaluations (unknown years of experience). The results showed that the ML model could predict the risk of SLNM in breast cancer patients based on ultrasound image features with high accuracy (84.6%), having outperformed the radiologist. In addition, SHAP analysis deemed suspicious lymph nodes, microcalcifications, spiculation at the edge of the lesion, and distorted tissue structure around the lesion as the model’s most significant features.
Nonetheless, this research was retrospective and used a minimal number of patients from a single institution with limited pathological types of breast cancer.

A total of 13 708 records were identified in our search, which was last updated on September 30, 2022. As shown in Fig. , after duplicate removal and filtering by SJR ranking, the titles and abstracts of 4 023 citations from Embase, IEEE Xplore, PubMed, Scopus, and Web of Science were assessed. In this stage, 3 325 papers were excluded for not being machine learning-based (n = 1 204, 29.9%), using genetic variables or omics (n = 705, 17.5%), not being externally validated (i.e., clearly mentioning performance evaluation only by cross-validation or hold-out sampling; n = 587, 14.6%), not being focused on oncology (n = 534, 13.3%), not regarding patient care or clinical decision-making (e.g., creation of data infrastructures or organizing EHRs; n = 166, 4.1%), not being primary research articles (n = 101, 2.5%), and not including human patients (n = 28, 0.7%). This left 698 papers eligible for full-text inspection, of which 62 were excluded for unavailability. From the remaining 636 reports, 274 (43.1%) were discarded for not assessing or quantifying clinical utility, 252 (39.6%) for not being externally validated, 17 (2.7%) for not directly concerning patient care, 13 (2%) for not reporting performance metrics, 13 (2%) for focusing on gene expression or omics, 4 (0.6%) for not containing machine learning models, 2 (0.3%) for not focusing on oncology, and 1 (0.2%) for being a secondary research paper. For example, although seemingly relevant – describing external validation and comparison of diagnostic competence against pathologists – Yang et al.'s study did not quantify clinicians' performance beyond reporting intraclass correlation coefficients, which led to its exclusion. No additional relevant documents were found by screening the included studies. Finally, 56 articles were included in this scoping review.
The completed form for the included studies can be found in Additional file . Table summarizes key findings from the 56 studies on patient-centered ML applications in oncology, providing an overview of algorithms, clinical applications, data types, and evaluation methods for clinical utility. The following subsections offer insights into different aspects of the data.

Journals, years of publication and reporting guidelines

As depicted in Fig. A, the included articles were retrieved from 31 journals with an average SJR (2021) of 2.496, from a minimum of 1.005 (Scientific Reports) to a maximum of 7.689 (Gastroenterology). Frontiers in Oncology was the most common source (n = 9, 16.07%, SJR = 1.291), followed by eBioMedicine (n = 6, 10.71%, SJR = 2.9) and European Radiology (n = 5, 8.93%, SJR = 1.73) . Eight (25.8%) of these journals were primarily dedicated to methodological issues and computational methods within artificial intelligence (dashed bars in Fig. A), while the remaining twenty-three (74.2%) focused on medical applications and patient-related topics. Concerning the year of publication, although citations since 2014 were screened, only papers from 2018 onwards met the inclusion criteria. The number of reports increased substantially after 2020, with 23% (n = 13), 27% (n = 15), and 43% (n = 24) of the sources being from 2020, 2021, and 2022, respectively, versus 2% (n = 1) in 2018 and 5% (n = 3) in 2019 (Fig. B). While the majority did not adhere to any reporting guidelines (n = 48, 85.714%), 3 (5.357%) used TRIPOD , 3 (5.357%) followed STARD 2015 (commonly used for diagnostic and prognostic studies) , and 2 used CONSORT-AI and STROBE (1 each, 1.786%). Lastly, caveats were not reported for a small percentage of studies (7.14%, n = 4) .

Algorithms, cancer types and clinical outcomes

The features of the machine learning algorithms found in the included articles are detailed in Table .
Sixty-two models were described in the 56 documents, with 55.4% (31/56) of the authors explicitly mentioning which algorithms were used in the paper's abstract. Most developers opted for an ensemble approach (n = 27, 48.2%), 26 (46.4%) for single models, and three (5.4%) for both . Of the selected studies, 50 (89.3%) were exclusively devoted to classification, 4 to regression (7.1%) , and 2 developed both types of models (3.6%) . All models were supervised except in one study (semi-supervised) , and 50% of the researchers (n = 28) compared their systems against other ML algorithms. Apart from the work developed in , where the model was silently integrated into the patients' EHRs, all models were deployed as standalone systems. Overall, 30 (53.6%) can be classified as CADx, 19 (33.9%) as CDSS, 2 (3.6%) as CADe , and 5 as both CADe and CADx (8.9%) . Regarding interfaces, most tools were desktop-based (n = 46, 82.1%), and 10 (17.9%) were deployed as web-based applications . All websites were reported, 43 articles (76.79%) disclosed which software was used, and code was provided for 11 models (19.6%) . Most studies were deep-learning based (n = 36, 64.3%). From these, the most frequently reported models were Convolutional Neural Networks (CNNs), used either alone (29/36, 80.55%), coupled with a Recurrent Neural Network (RNN, 3/36, 8.34%) , or combined with Logistic Regression (LR), a shallow ANN, Gradient Boosting (GB), a Support Vector Machine (SVM), and Random Forest (RF) in a single article (1/36, 2.78%) . Specific CNN architectures were reported for approximately 76% of the articles (25/33), which, as shown in Fig. , primarily consisted of ResNet- (n = 9, 36%) and DenseNet-based frameworks (n = 8, 32%), used individually or in conjunction.
To overcome data scarcity, transfer learning was used in 16 of the 33 CNN-based articles (48.5%), which involves pre-training the network on a specific problem and transferring that base knowledge to a new, related task (marked "pre-trained" in the General Focus and Models column of Table ). Besides CNNs, other DL algorithms were described in four articles . Multilayer Perceptrons (MLPs) were used in three (8.33%) , two of which applied a DeepSurv architecture, a deep Cox proportional hazards feed-forward neural network . The last (2.78%) involved a neural multitask logistic regression model (N-MTLR) . The remaining documents (n = 20, 35.7%) described a non-deep-learning-based workflow encompassing fifteen unique algorithms applied in twenty-eight configurations. From these, boosting-based techniques were the most widely reported, consisting of eXtreme Gradient Boosting (XGBoost, 6/28, 21.43%) , a Light Gradient Boosting Machine (LightGBM, 1/28, 3.57%) , LogitBoost (1/28, 3.57%) , Adaptive Boosting (AdaBoost, 1/28, 3.57%) , and Gradient-Boosted Decision Trees (GBDT, 2/28, 7.14%) . Other decision tree designs were also used, including RF (6/28, 21.43%) and extremely randomized trees (ExtraTrees, 1/28, 3.57%) . The third most reported group of algorithms comprised SVMs , a Support Vector Classifier (SVC) , and a Quadratic SVM (4/28, 14.28%), followed by shallow ANNs (2/28, 7.14%) and LR (1/28, 3.57%) . Lastly, Mixture Discriminant Analysis (MDA), k-nearest Neighbors (kNNs), and naïve Bayes (NB) were also found, all used in the same article (total of 3/28, 10.71%) . Regarding general cancer types, the selected papers can be broadly divided into two categories: those concentrating on primary tumors and those mainly examining metastasized (secondary) cancers. Most articles focused on primary tumors (51/56, 91.1%), although four also included metastases .
These cancers can be further branched into the specific system where the malignancy formed: (i) central nervous system (CNS), including the brain (3/51, 5.88%) ; (ii) digestive system, encompassing colorectal (7/51, 13.73%) , esophageal (3/51, 5.88%), gastric (5/51, 9.8%) , and liver cancers (2/51, 3.92%) ; (iii) endocrine system, involving cancers of the pancreas (2/51, 3.92%) and thymus (1/51, 1.96%) ; (iv) genitourinary system, consisting of bladder (1/51, 1.96%) , cervical (1/51, 1.96%) , prostate (2/51, 3.92%) , and endometrial (2/51, 3.92%) cancers; (v) integumentary system, with tumors of the breast (4/51, 7.84%) and skin (2/51, 3.92%); (vi) respiratory system, studying neoplasms of the larynx (1/51, 1.96%) , lung (10/51, 19.61%) , mesothelium (1/51, 1.96%) , and nasopharynx (1/51, 1.96%) ; and (vii) the skeletal system, comprising the bones (4/51, 7.84%) . In addition, five papers analyzed metastatic cancers (5/56, 8.9%), which can also be bifurcated into malignancies spread to organs or nodes. The former includes solid metastatic breast, lung, gastrointestinal, and genitourinary tract tumors , bone metastases in kidney cancer patients , and liver metastases from colorectal cancers . The latter encompasses thyroid cancer spread to lymph nodes and sentinel lymph node metastasis from primary breast lesions . Seventy-six cancer-related goals were addressed in the 56 documents, averaging 1.4 tasks per paper, with a maximum of three . These included the development or improvement of systems for: (i) diagnosis alone (n = 28, 50%) or combined with detection (n = 5, 8.93%) or prognosis (n = 1, 1.79%) ; (ii) detection by itself (n = 2, 3.58%) or coupled with outcome prediction (n = 1, 1.79%) ; and (iii) outcome prediction, including prognosis (n = 16, 28.58%) and risk stratification (n = 3, 5.36%) . Finally, fifteen studies resorted to explainable AI (XAI) to increase the transparency behind the models' decisions.
Unlike black-box methods, whose reasoning is indecipherable, XAI allows the creation of interpretable models to determine how each prediction was reached and which clinical predictors bore the most weight. Three packages were used for this purpose: (i) SHapley Additive exPlanations (SHAP), which can be employed with any ML algorithm (n = 6, 40%) ; and (ii) Class Activation Mapping (CAM, n = 1, 6.67%) and Gradient-weighted CAM (Grad-CAM, n = 8, 53.33%) , explicitly developed for CNNs.

Clinical inputs and populations

According to the clinical variables used as input, the models validated in the 56 studies can be divided into three types: image-based (including video, n = 37, 66.1%), text-based (n = 10, 17.9%), and mixed, using both clinical modalities (n = 9, 16.1%).

Image-based Studies

A total of 335 085 high-resolution images from 112 538 patients (102 117 female, 8 215 male) were used for classification in 36 of the 37 image-based studies and for classification (recurrence) and regression (recurrence-free survival) in the last study . Except for one paper including both pediatric and adult patients (unknown age proportion, 175 female, 116 male) and two other articles not listing the patients’ age group (698 in , unknown in , unidentified male–female ratio in both), all studies consisted of adults (111 469 patients, 101 942 women, 8 099 men). Eight studies (21.6%) extracted radiomic features from the retrieved images . The studies encompassed X-rays, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography – Computed Tomography (PET-CT) scans, endoscopic images and videos, photographs, ultrasounds, histological slides, and whole-slide images (WSI). Besides digital pictures, which are limited to the surface, these imaging techniques capture the body's internal structures. However, they differ in the way they create images and the type of information they provide.
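Before turning to the individual imaging modalities, the Shapley principle underlying the SHAP package tallied above can be illustrated in a few lines. The sketch below computes exact Shapley values for an invented two-feature toy "risk model" (all names and numbers are illustrative, not from any reviewed study); SHAP approximates this computation because the exact version scales exponentially with the number of features.

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: average each feature's marginal contribution
    to value() over every ordering of the features."""
    names = list(features)
    contrib = {n: 0.0 for n in names}
    orderings = list(permutations(names))
    for order in orderings:
        present = {}
        prev = value(present)              # start from the empty coalition
        for n in order:
            present[n] = features[n]
            cur = value(present)
            contrib[n] += cur - prev       # marginal contribution of n
            prev = cur
    return {n: contrib[n] / len(orderings) for n in names}

# Invented toy "risk model": baseline 0.1, +0.3 for high age, +0.2 for a
# large tumour, +0.25 extra when both are present (an interaction term).
def toy_risk(present):
    risk = 0.1
    if present.get("age_high"):
        risk += 0.3
    if present.get("tumour_large"):
        risk += 0.2
    if present.get("age_high") and present.get("tumour_large"):
        risk += 0.25
    return risk

phi = shapley_values({"age_high": True, "tumour_large": True}, toy_risk)
```

For this toy model the attributions come out as 0.425 for age and 0.325 for tumour size, and they sum to the difference between the full prediction (0.85) and the baseline (0.1) — the "efficiency" property that makes SHAP attributions additive.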
X-rays create images using ionizing radiation, to which the patient is exposed . Although time- and cost-effective, they do not provide as much detail as CT or MRI scans. In this review, two studies used radiographic images (2/37, 5.4%) to: (i) classify pathologically-confirmed primary bone tumors in children and adults (639 radiographs, 175 female, 116 male) ; and (ii) screen for breast cancer in adult women (n = 1, 213 694 X-rays, 92 585 women) . CT scans combine X-rays from different angles to create high-quality, three-dimensional images. Nevertheless, since they are generated from controlled motions of X-rays, CTs are still unfit for extracting molecular information . Furthermore, these scans subject the patient to higher radiation levels than X-rays and may require contrast agents depending on the adopted modality – contrast-enhanced CTs (CECTs) versus non-contrast CTs (NECTs). CT scans were a commonly collected variable in the selected articles (8/37, 21.6%), amounting to 7 540 images from: (i) the lungs (n = 4, 2 323 nodules, 2 113 patients) ; (ii) gastric cancers (n = 2, 1 129 images, 352 women, 777 men) ; (iii) cervical lymph nodes (n = 1, 3 838 images, 698 patients of unknown gender) ; and (iv) hepatic metastases from colorectal cancer (n = 1, 250 lesions, 31 women, 54 men) . MRI scans do not depend on radiation; they use a strong magnetic field and radio waves to create detailed images. This type of imaging can be separated into two subtypes: conventional and advanced . Conventional MRI (cMRI) sequences include standard MRI protocols commonly used in clinical practice, such as (i) T1-weighted: used to identify structural abnormalities; (ii) axial fluid-attenuated inversion recovery (FLAIR) MRI: applied to identify abnormalities that affect the tissues' water content; and (iii) T2-weighted: also appropriate to assess irregularities in water content.
Advanced MRI (advMRI) techniques generate deeper information regarding the tissue's function, structure, and metabolic processes, including: (i) multiparametric MRI (mpMRI), which combines several other MRI sequences to enrich its output; (ii) axial diffusion-weighted imaging (DWI) MRI, which measures the movement of water molecules in tissues; (iii) vascular architecture mapping (VAM) MRI, providing information about the tissue's blood vessels; (iv) gradient echo dynamic susceptibility contrast (DSC) MRI, used to measure blood movement; (v) quantitative blood-oxygenation-level-dependent (qBOLD) MRI, able to measure the oxygen content in the blood; (vi) General Electric-Dynamic Susceptibility Contrast (GE-DSC) MRI, which resorts to a contrast agent to measure blood flow; and (vii) magnetic resonance spectroscopy (MRS), which calculates the levels of certain chemicals and metabolites in the tissues. Although some types of MRI – such as MR spectroscopy and diffusion-weighted imaging – allow assessing molecular details without contrast agents, most are better equipped to analyze gross internal structures and are more expensive than CTs and X-rays . MRI scans were also frequently used as input for the models, with 64 941 combined images from 8 studies (21.6%), including (i) the brain (n = 3, 64 459 lesions, 623 women, 461 men) ; (ii) the prostate (n = 2, 262 nodules, 300 men) ; (iii) colorectal malignancies (n = 2, 154 images, 54 women, 64 men) ; and (iv) bones and cartilages (n = 1, 65 scans, 34 women, 31 men) . PET scans, which rely on an injected radioactive tracer, allow for examining the internal body structure and the underlying molecular activity of tissues. However, they are extremely expensive, usually unavailable in routine practice, and, due to their low spatial resolution, require pairing with a second modality, such as CT or MRI . In this review, one study (2.7%) used PET-CT scans to examine atypical cartilaginous tumors and appendicular chondrosarcomas (36 scans, 23 women, 13 men) .
Similarly to X-rays, ultrasounds – which use high-frequency sound waves to create images – provide an inexpensive method to inspect organ structures without detailing underlying molecular information, with the upside of not involving radiation . Ultrasonographic imaging was mentioned in 2 articles (n = 2, 5.4%), which studied breast cancers (116 ultrasounds, 107 women) . Eight reports describe images captured with standard endoscopes (n = 8, 24.3%, 3 861 images), which cannot capture molecular features. Four studies used colonoscopic images of lesions from the colon and rectum (995 images, 105 women, 224 men) . Four studies analyzed endoscopic pictures of the esophagus (n = 2, 260 images, 260 patients of unknown gender) , the larynx (n = 1, 1 176 images, unknown number of patients) , and the nasopharynx (n = 1, 1 430 images, 124 women, 231 men) . Lastly, one study examined endoscopic videos from intramucosal gastric cancer patients (54 videos, 38 women, 16 men) . Two studies used advanced endoscopes. One involved endoscopic ultrasonography (EUS), a technique that combines endoscopy and ultrasonography to gather gastrointestinal images (n = 1, 2.7%, 212 ultrasounds, 38 women, 31 men) . The other resorted to endocytoscopy, a relatively new high-magnification imaging approach that allows tissue analysis at a cellular level, to collect 100 colorectal images from 89 patients (n = 1, 2.7%, 26 women, 63 men) . A histological image is a high-resolution, microscopic image of a tissue slide after it has been processed with one or more stains to reveal its composition . This method allows distinguishing between different histological cancer subtypes but involves a long preparation time and offers a limited depth of view. One paper used hematoxylin-and-eosin (H&E)-stained histological images to study endometrial hyperplasia and intraepithelial neoplasia (n = 1, 2.7%, 1 631 slides, 102 women) .
Whole-slide images (WSIs) are virtual representations of a tissue section scanned at high resolution and magnification. WSIs are created by scanning stained histological slides and usually combine and magnify multiple slides using specialized software . This technique allows thorough tissue examination at cellular and sub-cellular levels, but it remains demanding in cost, storage, and technical requirements. WSIs were used to feed the models in three studies (8.1%, 3 315 images), using 30× or 40× magnification. Two included H&E-stained slides of the liver (n = 1, 80 slides, 24 women, 56 men) and the mesothelium (n = 1, 39 images, 39 patients of unreported gender) . One was composed of stained slides (unknown stain) for the cervical screening of women without any known conditions and with the Human papillomavirus (HPV) (n = 1, 1 565 images from 1 565 women) . Finally, 46 962 digital photographs (captured with a camera) were analyzed across two documents (5.4%). Both inspected skin malignancies (n = 2, 10 602 patients). Detailed information regarding the samples, type of CTs, MRIs, and endoscopes used in the image-based studies, as well as population details and counts (age group, total patients, female, and male), is itemized in Table .

Text-based Studies

The populations and specific clinical variables used in each text-based study are compiled in Table . Clinical data from 6 803 patients (2 772 women, 4 031 men, 7 861 encounters) were collected for validation across ten papers . Apart from one work including senior citizens , all studies consisted of adult patients (6 644 subjects, 2 701 women, 3 943 men). An average of 17 clinical variables was used per study (range = 6–31 ), encompassing information on demographics, tumoral values, and laboratory test results. The machine learning models used in 6 of the articles (60%) were exclusively developed for classification (1 960 women, 3 097 men) , while 4 (40%) solely concerned regression (812 women, 934 men) .
In the four regression-based articles, the developed prognostic models assessed (i) patients with a single lesion of primary stage I to IV esophageal adenocarcinoma or squamous cell carcinoma (n = 1, 150 women, 350 men) ; (ii) patients with pathologically confirmed and resected intrahepatic cholangiocarcinoma (12 women, 30 men) ; (iii) patients with stage I to III non-small cell lung cancer (642 women, 540 men) ; and (iv) patients in palliative care with unresectable advanced pancreatic ductal adenocarcinoma with liver metastases (8 women, 14 men) . The six classification papers included: (i) seniors with stage I to III non-small cell lung cancer treated with curative-intent radiotherapy (159 individuals, 71 women, 88 men) ; (ii) bone metastasis in kidney cancer patients with complete survival data (323 women, 640 men) ; (iii) women with primary breast cancer diagnosed by pathological examination (150 women) ; (iv) patients with primary colorectal cancer with survival-related data who underwent surgery (1 572 patients, 607 female, 965 male) ; (v) patients with confirmed stage III non-small cell lung cancer (39 women, 133 men) ; and (vi) patients with solid metastatic tumors for several types of cancer with and without alterations in treatment in an outpatient setting (3 099 encounters, 2 041 individuals, 770 women, 1 271 men) .

Mixed Studies

An average of 9 clinical variables (range = 3–17 ), 784 images, and 720 patients (range = 44–5 493 for both) were used in the nine mixed studies, whose information is highlighted in Table . These papers combined patients’ demographics, cancer-specific data, laboratory results, and imaging features extracted from different modalities for cancer-specific populations (7 053 images, 6 482 patients, 3 009 women, 3 478 men). Radiomics approaches were used in three studies .
Six reports included CT images to study: (i) patients who underwent curative-intent resection for pancreatic ductal adenocarcinoma ( n = 1, 53 images, 27 women, 26 men) ; (ii) patients with benign and malignant pulmonary ground-glass nodules measuring less than 30 mm ( n = 1, 63 images, 39 women, 22 men) ; (iii) individuals with multiple lung nodules in a post-operative setting ( n = 1, 200 images, 51 women, 27 men) ; (iv) lung cancer patients with an available baseline radiograph ( n = 1, 5 493 patients and images, 2 456 women, 3 037 men) ; (v) patients with muscle-invasive bladder cancer who underwent surgery ( n = 1, 75 images, 13 women, 62 men) ; and (vi) adults with pathologically confirmed thymomas and thymic carcinomas ( n = 1, 76 preoperative scans, 33 women, 48 men) . Additionally, three studies used other types of scans. One work paired breast-specific data with features derived from three types of MRI scans for women with endometrial lesions and complete clinical data (44 images, 44 women) . One paper combined patients’ age, sex, tumor type, location, and radiomic features extracted from X-rays to analyze primary bone tumors (40 women, 56 men) . Finally, one study evaluated survival- and gross-tumor-related data in conjunction with H&E slides magnified 30 times (whole-slide images) to estimate outcomes for patients diagnosed with gastric cancer (175 images, 91 patients, 60 female, 31 male) . All algorithms were classifiers except those developed in this last study, where the first model used only WSIs for classification and the second combined these images with clinical data for prognostication (regression). Validation design, clinical settings and performance metrics Information concerning institutional, study, and validation designs, care types, datasets, clinical settings, and the number of institutions involved in validation in the selected documents is illustrated in Table .
Model development and validation were performed simultaneously in most studies ( n = 49, 87.5%), while 4 (7.14%) evaluated external validity separately, and 3 (5.36%) entailed model updating and validation. Of the 56 documents included in this review, 44 (78.57%) directly reference external validation in the abstract, 10 (17.86%) indirectly mention it, and 2 (3.57%) omit this information. Overall, 74 medical datasets were used for external validation across the 56 studies, averaging 1.3 per paper (range = 1 – 8). All studies used real-world data acquired prospectively or collected from the patients' EHRs and imaging archiving platforms. Except for three articles using both standard and uncommon types of MRI scans and one using endocytoscopy (whose use is still growing) , all studies used text- and image-based data routinely collected in clinical practice. However, only nine reports describe external validation in clinically realistic scenarios , and solely two systems are currently implemented in practice . The papers involved several cancer-related settings, including secondary ( n = 1, 2%), tertiary ( n = 34, 61%), and quaternary ( n = 12, 21%) oncology care. However, 6 (11%) studies did not report from which centers data were retrieved, and 3 (5%) used databases without this information. Among the collected studies, 49 (87.5%) were conducted retrospectively, 3 (5.36%) were prospective, and 4 (7.14%) were mixed: one performed internal validation prospectively and external validation retrospectively , one proceeded inversely , and two used both retrospective and prospective cohorts . Only one report used randomized data . Regarding validation design, 31 (55.36%) studies followed a multi-institutional approach, 14 (25%) collected information from a single center, 1 (1.79%) only used public databases, 2 (3.57%) used public multi-institutional databases, and 8 (14.29%) used both types of sources.
For the multi-institutional studies (including databases), the average number of facilities used for validation was 3, with a maximum of 33 . One study did not report the number of institutions involved . The following freely available data sources were used: (i) the Surveillance, Epidemiology, and End Results (SEER) database, which covers population-based cancer registries of approximately 47.8% of the United States population ; (ii) The Cancer Genome Atlas (TCGA, from the USA), which molecularly characterizes over 20,000 primary cancers, and contains whole-slide images ; (iii) The Cancer Imaging Archive, which hosts a large number of medical images for various types of cancer ; (iv) the Edinburgh dataset, containing data from the University of Edinburgh (Scotland, United Kingdom) ; (v) the Prostate, Lung, Colorectal, and Ovarian (PLCO) randomized trial sponsored by the National Cancer Institute (NCI), designed to evaluate the impact of cancer screening on mortality rates, as well as to assess the potential risks and benefits associated with screening ; (vi) the National Lung Screening Trial (NLST), a randomized controlled trial also supported by the NCI that aimed to evaluate the impact of using low-dose helical CT scans on patient mortality ; (vii) the PROSTATEx dataset, which contains a retrospective set of prostate MRI studies ; (viii) the PICTURE dataset, containing data from a single-center trial, and intended to evaluate the diagnostic accuracy of multiparametric magnetic resonance imaging (mpMRI) in men with prostate lesions ; and (ix) the National Human Genetic Resources Sharing Service Platform (NHGRP), for which we could not find any details . In two studies, models were trained using data from multiple countries. One developed their model using patients from three Chinese institutions and one center from the United States of America (USA) and validated it on a Chinese dataset ( n = 1, 1.8%) .
The other gathered data from a Chinese institution and TCGA and validated their model on images from NHGRP . Additionally, one document did not report which countries were involved in their model’s development or validation . All other authors developed their model on data from a single country. These included China ( n = 19, 33.9%), the USA ( n = 12, 21.4%), South Korea ( n = 9, 16.1%), Italy and Germany (3 each, 5.4%), Japan and the Netherlands (2 each, 3.6%), and the United Kingdom (UK), Canada, and Austria (1 each, 1.8%). Besides the two abovementioned papers , twelve other studies performed international validation. Of these, six included ethnically different sources. Two authors trained their model with data from South Korea: one validated it on South Korean and American datasets , and the other validated it on a South Korean dataset and the Edinburgh dataset (UK) . Additionally, five reports mention training their model on the SEER database (USA), with four validating it with Chinese patients and one with South Korean patients . For the five remaining studies, patients with the same ethnicity were included: (i) one was developed with the NLST trial dataset (USA) and validated on data from the UK ; (ii) one was trained with data from TCGA (USA) and validated on an institution from the UK ; (iii) one used data from Italy for training and patients from The Netherlands for validation ; (iv) one trained their model on the PROSTATEx dataset (from The Netherlands) and validated it on the PICTURE dataset (from the UK) ; and (v) one used a Chinese dataset for training and Chinese and South Korean patients for validation . Regarding validation types, 12 studies (21.43%) were limited to temporal validation from a single institution, which cannot be interpreted as a fully independent validation . Five other studies also only temporally validated their model.
However, two used a multi-institutional approach (3.57%) , two (3.57%) used different data acquisition designs (retrospective internal validation and prospective external validation) , and one evaluated performance for patients at different treatment stages (1.79%) . Nine studies (16.07%) only validated their model geographically, seven within the same country , one internationally , and one with internationally and ethnically different patients . Twenty-nine reports (51.8%) included both temporal and geographical validation. Sixteen (28.57%) used local data, one evaluated temporally and geographically different patients from the same country with images captured using various scanners , and one (1.79%) used national data and mixed data acquisition (prospective internal validation and retrospective external validation) . Lastly, one study that did not report data sources validated their model on different types of computed tomography (CT) scanners . The external datasets were used to evaluate the models’ generalizability to populations differing – geographically, temporally, or both – from the development cohort. The performance metrics reported in the articles can be branched into three categories: discrimination, calibration, and processing time. For classification models, an average of 5 metrics were used to assess discrimination, up to a maximum of seven (range = 1 – 7).
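These discrimination metrics, together with the Brier score used for calibration, can all be computed directly from labels and predicted probabilities. A plain-Python sketch on synthetic values (not data from any included study):

```python
# Plain-Python sketch of the discrimination and calibration metrics most
# often reported in the reviewed studies. Labels and predicted
# probabilities below are synthetic examples, not data from any study.

def discrimination(y_true, y_prob, threshold=0.5):
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn)                    # sensitivity (recall)
    ppv = tp / (tp + fp)                     # positive predictive value
    return {
        "sensitivity": sens,
        "specificity": tn / (tn + fp),
        "ppv": ppv,
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true),
        "f1": 2 * ppv * sens / (ppv + sens),
    }

def auc(y_true, y_prob):
    # Probability that a random positive case is ranked above a random
    # negative case; ties count one half (rank interpretation of AUC).
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(y_true, y_prob):
    # Calibration: mean squared difference between outcome and probability.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]
print(discrimination(y_true, y_prob))  # sensitivity 0.75, specificity 0.75, ...
print(auc(y_true, y_prob))             # 0.9375
print(brier(y_true, y_prob))           # 0.125
```

Note that sensitivity, specificity, and the predictive values all depend on the chosen decision threshold, which is why the missing cut-off values noted below limit reproducibility; AUC and the Brier score are threshold-free.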
These consisted of (i) sensitivity, reported in 48 papers; (ii) area under the receiver operating characteristic (ROC) curve (AUC), calculated in 43 studies; (iii) specificity, used in 42 articles; (iv) accuracy, presented in 35 documents; (v and vi) positive and negative predictive values (PPV and NPV), computed in 29 and 19 reports, respectively; (vii) F1-score, considered in 13 papers; (viii) C-index, used in 2 articles ; (ix) false positive rate, reported in two papers ; (x) area under the alternative free-response ROC curve (AUAFROC) , calculated for one model; (xi) jackknife alternative free-response ROC (JAFROC), also computed for one algorithm ; and (xii) Softspot (Sos) and Sweetspot (Sws) flags, both used in the same two papers . However, decision thresholds were only disclosed for half of the articles (26/52, 50%), and only three papers presented results for different cut-off values/settings . Likewise, 39 classification studies did not assess calibration. When evaluated (13/52, 25%), calibration was illustrated graphically in five studies (9.62%) , via Brier Score in three documents (5.77%) , using both approaches in four papers (7.69%) , and with mean absolute error (MAE) in one report . Lastly, the models’ processing time was also seldom revealed, with only seven studies reporting it . For the regression-based algorithms, discriminative performance was assessed via C-index . Regarding calibration, the model’s Brier Score was presented in one study , calibration plots in two , both metrics in one , and none in two . The models’ processing time and decision thresholds were not reported in any of these studies. Clinical utility From the selected studies, the majority ( n = 50, 89.29%) explicitly mentions the assessment of the models' clinical utility, that is, its relevance to clinicians and patient outcomes, in the paper's abstract.
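One defensible way to formalize such a model-versus-clinician comparison on the same cases is McNemar's exact test on the discordant predictions; this is an illustrative choice, not necessarily the procedure used in the reviewed papers. A sketch with hypothetical correctness indicators:

```python
# Sketch of a paired model-versus-clinician comparison on the same cases
# via McNemar's exact test (illustrative choice; the reviewed papers do
# not all specify their statistical test). Data below are hypothetical.
from math import comb

def mcnemar_exact(model_correct, clinician_correct):
    # b: cases only the model got right; c: cases only the clinician got right.
    b = sum(1 for m, c in zip(model_correct, clinician_correct) if m and not c)
    c = sum(1 for m, c in zip(model_correct, clinician_correct) if c and not m)
    n = b + c
    if n == 0:
        return 1.0  # no discordant cases: no evidence of a difference
    # Two-sided exact binomial p-value under H0: discordance is symmetric.
    tail = sum(comb(n, k) for k in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

model_correct     = [True, True, False, True, True, True, False, True, True, True]
clinician_correct = [True, False, True, True, False, True, False, True, False, True]
print(mcnemar_exact(model_correct, clinician_correct))  # 0.625
```

Because the test conditions on the discordant pairs only, it matches the paired structure of these studies (the same patients are read by both the model and the clinicians) better than comparing two independent proportions would.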
However, one only refers to it indirectly (1.79%) , and the remaining five (8.93%) do not state this aspect in their summaries . Two approaches were used to assess the models’ utility: comparison against clinician performance, adopted in most studies (40/56, 71.4%), and benchmarking against established clinical tools (15/56, 26.8%). Additionally, one study used both: retrospective comparisons were performed against routine clinical scores, while prospective assessments involved clinicians (1/56, 1.8%) . Comparison Against Clinicians Four hundred and ninety-nine medical professionals of varying expertise were involved in these studies, with an average of 12 clinicians compared against each model (range = 1 – 109 ). These included endoscopists ( n = 204), oncologists ( n = 77), radiologists ( n = 76), general physicians ( n = 71), dermatologists ( n = 44), pathologists ( n = 21), ophthalmologists ( n = 3), and thoracic surgeons ( n = 3). A subset of 113 115 patients (102 178 female, 9 619 male) was used for these assessments, evaluated with the same performance metrics as those documented for external validation, plus time until diagnosis. Specific clinicians’ years of experience were reported in 20 papers (48.8%), ranks (without years) in 11 (26.8%), and no information concerning expertise in 10 (24.4%). The 41 classification studies encompassing model comparison against clinicians followed one of two designs: independent evaluation of the models and the clinicians, or clinician assessment with and without model assistance. The most commonly adopted technique was separately assessing model and clinician performance and comparing the results afterwards ( n = 30, 73.2%). Four hundred and one clinicians (μ = 15 per report, range = 1 – 109) and 109 720 patients (μ = 3 657 per paper, 100 965 female, 8 203 male ) were involved in these papers, and model-clinician performance was compared for detection and diagnostic capabilities.
An average of 4 performance metrics (range = 1 – 7 ) were computed per paper, with sensitivity being the most calculated ( n = 23), followed by specificity ( n = 18) and accuracy ( n = 15), AUC ( n = 11), PPV ( n = 11), NPV ( n = 7), F1-score ( n = 3) , false positive rate ( n = 2) , Sweetspot and Softspot flags ( n = 2) , diagnostic time ( n = 1) , AUAFROC ( n = 1) , and JAFROC ( n = 1) . The second approach involved comparing clinician performance with and without the assistance of the artificially intelligent systems developed by the authors ( n = 11, 26.8%). The eleven studies employing this method comprised 92 clinicians (μ = 8, minimum = 1, maximum = 20 ) and 3 337 patients (μ = 370, 1 223 female, 1 416 male ). Similarly to the previous technique, an average of 4 performance metrics were used per paper (range = 1 – 6 ), including sensitivity ( n = 9), specificity ( n = 8), accuracy ( n = 8), PPV ( n = 6), NPV ( n = 5), AUC ( n = 2) , mean diagnostic time ( n = 2) , and error rate ( n = 1) . Comparison Against Standard/Established Clinical Tools In sixteen studies, assessing the usefulness of the models involved comparing their performance against well-established and routinely used clinical tools. In total, 11 659 patients (μ = 777 per paper, 4 521 female, 5 694 male ) were encompassed in these assessments, and twelve standard tools were used for comparisons.
These included: (i) the 7th and 8th editions of the Tumor, Node, and Metastasis (TNM) staging system; (ii) the Brock University Model; (iii) the Fracture Risk Assessment Tool (FRAX); (iv) the Liver Cancer Study Group of Japan (LCSGJ); (v) the Mayo Clinic model; (vi) the modified Glasgow Prognostic Score (mGPS); (vii) the Osteoporosis Self-Assessment Tool for Asians (OSTA); (viii) the second version of the Prostate Imaging Reporting and Data System (PI-RADS v2); (ix) the Peking University (PKU) model; (x) the PLCOm2012 model; (xi) the Response Evaluation Criteria in Solid Tumors (RECIST); (xii) the Veterans Affairs (VA) model; and (xiii) the World Health Organization (WHO) performance status. Except for one study , all papers explicitly mention comparisons against these tools in the abstract. The TNM system, created by the American Joint Committee on Cancer (AJCC), is globally used in routine clinical procedures. It categorizes cancer progression and guides subsequent treatment decisions depending on (i) the size and extent of the primary tumor (T), (ii) whether it has spread to nearby lymph nodes (N), and (iii) whether it has metastasized to distant organs (M) . In this review, two text-based classification studies compared their models against the 7th edition of this staging system (TNM-7): one juxtaposed diagnostic and prognostic (3-year overall survival) predictions for bone metastasis in kidney cancer patients (323 women, 640 men) , while the other compared 1–10-year postoperative survival predictions for patients with colorectal cancer (607 women, 965 men) . Similarly, seven papers resorted to the 8th edition of AJCC TNM (TNM-8), its revised and updated version. On the one hand, in four articles, the models were only compared against this system. Two analyzed their text- and regression-based models to predict cancer-specific survival for esophageal (500 patients, 150 women, 350 men) and lung tumors (1 182 individuals, 642 female, 540 male) .
The other two concerned the evaluation of classification models. Using preoperative images and descriptive data, one compared 2-year overall survival and 1-year recurrence-free survival predictions for patients with pancreatic cancer (27 female, 26 male) . The other compared risk stratification performance for overall survival for lung cancer patients (39 women, 133 men) between their model and the TNM-8 system using only text-based data . On the other hand, in three text-based studies, models were compared against TNM-8 and other tools. One paper also contrasted model performance for recurrence, recurrence-free survival, and overall survival for lung cancer patients (71 women, 88 men) with the WHO performance status, often used in oncology to determine patients' overall health status, prognosis, and the ability to tolerate treatment . This scaling system ranges from 0 to 4, where 0 represents no symptoms and pre-disease performance, and 4 translates to total disability. In the second article, predictions of overall postoperative survival were benchmarked against TNM-8 and LCSGJ (42 liver cancer patients, 12 women, 30 men) . LCSGJ is a group of Japanese medical professionals specializing in diagnosing and treating liver cancer, recognized as a leading authority in this cancer research field. Lastly, the third study describes the development of three risk models for breast cancer patients (150 women) : (i) fracture, whose predictions were contrasted with those generated by FRAX; (ii) osteoporosis, compared against FRAX and OSTA; and (iii) survival, benchmarked against TNM-8. FRAX is a web-based tool designed to stratify 10-year bone fracture risk, and OSTA assesses the risk of osteoporosis in Asian populations . The Brock University (also known as PanCan) model is a logistic regression model devised to assist in risk stratification for lung cancer.
It is recommended in the British Thoracic Society guideline as a tool to decide if nodules measuring 8 mm or more in maximum diameter should be assessed further with PET-CT . Here, it was applied in one of the selected papers to compare predictions of malignancy risk for lung cancer from CECT and NECT scans (1 397 images, 1 187 patients, unknown gender proportion) . In addition to the Brock Model, comparisons in a second paper (978 CTs, 493 patients, 297 women, 196 men) were also performed against three other tools: (i) the Mayo model, which the Mayo Clinic developed to assess cancer prognosis and predict patient outcomes; (ii) the PKU model, created by the Peking University; and (iii) the VA model, which includes a comprehensive cancer care system that aims to provide high-quality, evidence-based care to veterans with cancer . The mGPS scale is a validated scoring system formulated to assess the prognosis of patients with advanced or metastatic cancer based on nutritional and inflammatory markers . In this review, it was used to establish clinical utility for a text-based classification model developed to predict overall survival for patients with unresectable pancreatic tumors (22 patients, 8 women, 14 men) . PI-RADS is a standardized system for interpreting and reporting findings from prostate MRI scans, created to guide clinical decision-making in diagnosing and treating prostate cancer. In this context, it was contrasted against a model developed to stratify low- and high-risk patients (39 and 14 men, respectively) . PLCOm2012 is a validated risk score that uses logistic regression to predict the probability of lung cancer occurrence within six years based on demographic and clinical information . It was the chosen comparator in a study predicting 12-year lung cancer incidence using low-dose CT images and patients’ age, sex, and smoking status (5 493 images and patients, 2 456 women, 3 037 men) .
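Risk scores such as PLCOm2012 and the Brock model share the same logistic-regression form: a linear combination of demographic and clinical predictors passed through a logistic link. The sketch below illustrates only that form; the coefficients and predictors are hypothetical placeholders, not the published PLCOm2012 or Brock values.

```python
# Illustration of the logistic-regression form shared by risk scores such
# as PLCOm2012 and the Brock model. All coefficients and predictors are
# hypothetical placeholders, NOT the published values of either tool.
from math import exp

HYPOTHETICAL_COEFS = {
    "intercept": -4.0,            # baseline log-odds
    "age_per_year_over_60": 0.03,
    "pack_year": 0.02,            # smoking history
    "family_history": 0.55,       # binary predictor
}

def hypothetical_risk(age, pack_years, family_history):
    # Linear predictor followed by the logistic link, yielding a probability.
    z = (HYPOTHETICAL_COEFS["intercept"]
         + HYPOTHETICAL_COEFS["age_per_year_over_60"] * (age - 60)
         + HYPOTHETICAL_COEFS["pack_year"] * pack_years
         + HYPOTHETICAL_COEFS["family_history"] * int(family_history))
    return 1.0 / (1.0 + exp(-z))

# With positive coefficients, risk rises monotonically with each predictor.
low = hypothetical_risk(60, 0, False)
high = hypothetical_risk(70, 30, True)
```

Because the output is a probability, such scores can be compared against the machine learning models in this review using exactly the discrimination and calibration metrics described earlier.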
Finally, RECIST is a set of guidelines used to evaluate the response of solid tumors to treatment in clinical trials and clinical practice. It was compared against two classification models: one aimed at detecting pathological downstaging in advanced gastric cancer patients from CECT images (86 patients and images, 23 women, 27 men) ; the other was designed to predict pathological tumor regression grade response to neoadjuvant chemotherapy in patients with colorectal liver metastases from MRI scans (61 images, 25 patients, 13 female, 12 male) . Several performance metrics were reported for the comparisons between the models developed in the selected papers and routinely used clinical tools, with an average of 3 metrics reported per document (range = 1 – 6). Here, the most frequently calculated metrics were AUC ( n = 11) and sensitivity ( n = 8), but PPV ( n = 5), C-index ( n = 4), specificity ( n = 4), accuracy ( n = 3), NPV ( n = 3), Brier Score ( n = 2) and F1-score ( n = 1) were also used in the evaluations. As depicted in Fig. A, the included articles were retrieved from 31 journals with an average SJR (2021) of 2.496, from a minimum of 1.005 ( Scientific Reports ) to a maximum of 7.689 ( Gastroenterology ). Frontiers in Oncology was the most common source ( n = 9, 16.07%, SJR = 1.291), followed by eBioMedicine ( n = 6, 10.71%, SJR = 2.9) and European Radiology ( n = 5, 8.93%, SJR = 1.73) . Eight (25.8%) of these journals were primarily dedicated to methodological issues and computational methods within artificial intelligence (dashed bars in Fig. A), while the remaining twenty-three (74.2%) focused on medical applications and patient-related topics. Concerning the year of publication, although citations since 2014 were screened, only papers from 2018 and onwards met the inclusion criteria.
The number of reports increased substantially after 2020, with 23% ( n = 13), 27% ( n = 15), and 43% ( n = 24) of the sources being from 2020, 2021, and 2022, respectively, versus 2% ( n = 1) in 2018 and 5% ( n = 3) in 2019 (Fig. B). While the majority did not adhere to any reporting guidelines ( n = 48, 85.71%), 3 (5.36%) used TRIPOD , 3 (5.36%) followed STARD 2015 (commonly used for diagnostic and prognostic studies) , and 2 used CONSORT-AI and STROBE (1 each, 1.79%) . Lastly, caveats were not reported for a small percentage of studies (7.14%, n = 4) . The features of the machine learning algorithms found in the included articles are detailed in Table . Sixty-two models were described in the 56 documents, with 55.4% (31/56) of the authors explicitly mentioning which algorithms were used in the paper's abstract. Most developers opted for an ensemble approach ( n = 27, 48.2%), 26 (46.4%) for single models, and three (5.4%) for both . Of the selected studies, 50 (89.3%) were exclusively devoted to classification, 4 to regression (7.1%) , and 2 developed both types of models (3.6%) . All models were supervised except in one study (semi-supervised) , and 50% of the researchers ( n = 28) compared their systems against other ML algorithms. Apart from work developed in , where the model was silently integrated into the patients' EHRs, all models were deployed as standalone systems. Overall, 30 (53.6%) can be classified as CADx, 19 (33.9%) as CDSS, 2 (3.6%) as CADe , and 5 as both CADe and CADx (8.9%) . Regarding interfaces, most tools were desktop-based ( n = 46, 82.1%), and 10 (17.9%) were deployed as web-based applications . All websites were reported, 43 articles (76.79%) disclosed which software was used, and codes were provided for 11 models (19.6%) . Most studies were deep-learning based ( n = 36, 64.3%).
From these, the most frequently reported models were Convolutional Neural Networks (CNNs), either alone (29/36, 80.55%), coupled with a Recurrent Neural Network (RNN, 3/36, 8.34%) , or with Logistic Regression (LR), a shallow ANN, Gradient Boosting (GB), a Support Vector Machine (SVM), and Random Forest (RF, 1/36, 2.78%) . Specific CNN architectures were reported for approximately 76% of the articles (25/33), which, as shown in Fig. , primarily consisted of ResNet- ( n = 9, 36%) and DenseNet-based frameworks ( n = 8, 32%), used individually or in conjunction. To overcome data scarcity, transfer learning was used in 16 of the 33 CNN-based articles (48.5%), which involves pre-training the network on a specific problem and transferring that base knowledge to a new, related task (see Table : pre-trained in column General Focus and Models ). Besides CNNs, other DL algorithms were described in four articles . Multilayer Perceptrons (MLPs) were used in three (5.56%) , two of which applied a DeepSurv architecture, a deep Cox proportional hazards feed-forward neural network . The last (2.78%) involved a neural multitask logistic regression model (N-MTLR) . The remaining documents ( n = 20, 35.7%) described a non-deep-learning-based workflow encompassing fifteen unique algorithms applied in twenty-eight configurations. From these, boosting-based techniques were the most widely reported, consisting of eXtreme Gradient Boosting (XGBoost, 6/28, 21.43%) , a Light Gradient Boosting Machine (LightGBM, 1/28, 3.57%) , LogitBoost (1/28, 3.57%) , Adaptive Boosting (AdaBoost, 1/28, 3.57%) , and Gradient-Boosted Decision Trees (GBDT, 2/28, 7.14%) . Other decision tree designs were also used, including RF (6/28, 21.43%) and extremely randomized trees (ExtraTrees, 1/28, 3.57%) . The third most reported group of algorithms were SVMs , a Support Vector Classifier (SVC) , and a Quadratic SVM (4/28, 14.28%), followed by shallow ANNs (2/28, 7.14%) and LR (1/28, 3.57%) .
Lastly, Mixture Discriminant Analysis (MDA), k-nearest Neighbors (kNNs), and naïve Bayes (NB) were also found, all used in the same article (total of 3/28, 10.71%) . Regarding general cancer types, the selected papers can be broadly divided into two categories: those concentrating on primary tumors and those mainly examining metastasized (secondary) cancers. Most articles focused on primary tumors (51/56, 91.1%), although four also included metastases . These cancers can be further branched into the specific system where the malignancy was formed: (i) central nervous system (CNS), including the brain (3/51, 5.88%) ; (ii) digestive system, encompassing colorectal (7/51, 13.73%) , esophageal (3/51, 5.88%), gastric (5/51, 9.8%) , and liver cancers (2/51, 3.92%) ; (iii) endocrine system, involving cancers of the pancreas (2/51, 3.92%) and thymus (1/51, 1.96%) ; (iv) genitourinary system, consisting of bladder (1/51, 1.96%) , cervical (1/51, 1.96%) , prostate (2/51, 3.92%) , and endometrial (2/51, 3.92%) cancers; (v) integumentary system, with tumors of the breast (4/51, 7.84%) and skin (2/51, 3.92%); (vi) respiratory system, studying neoplasms of the larynx (1/51, 1.96%) , lung (10/51, 19.61%) , mesothelium (1/51, 1.96%) , and nasopharynx (1/51, 1.96%) ; and (vii) the skeletal system, comprising the bones (4/51, 7.84%) . In addition, five papers analyzed metastatic cancers (5/56, 8.9%), which can also be bifurcated into malignancies spread to nodes or organs. The former includes solid metastatic breast, lung, and gastrointestinal and genitourinary tract tumors , bone metastases in kidney cancer patients , and liver metastases from colorectal cancers . The latter encompasses thyroid cancer spread to lymph nodes and sentinel lymph node metastasis from primary breast lesions . Seventy-six cancer-related goals were addressed in the 56 documents, with an average of one task performed per paper and a maximum of three . 
These included the development or improvement of systems for: (i) diagnosis alone ( n = 28, 50%) or combined with detection ( n = 5, 8.93%) or prognosis ( n = 1, 1.79%) ; (ii) detection by itself ( n = 2, 3.57%) or coupled with outcome prediction ( n = 1, 1.79%) ; and (iii) outcome prediction, including prognosis ( n = 16, 28.57%) and risk stratification ( n = 3, 5.36%) . Finally, fifteen studies resorted to explainable AI (XAI) to increase the transparency behind the models' decisions. Unlike black-box methods, whose reasoning is indecipherable, XAI allows the creation of interpretable models to determine how each prediction was reached and which clinical predictors bore the most weight. Three packages were used for this purpose: (i) SHapley Additive exPlanations (SHAP), which can be employed with any ML algorithm ( n = 6, 40%) ; (ii) Class Activation Mapping (CAM, n = 1, 6.67%) ; and (iii) Gradient-weighted CAM (Grad-CAM, n = 8, 53.33%) , the latter two explicitly developed for CNNs. According to the clinical variables used as input, the models validated in the 56 studies can be divided into three types: image-based (including video, n = 37, 66.1%), text-based ( n = 10, 17.9%), and mixed, using both clinical modalities ( n = 9, 16.1%). Image-based Studies A total of 335 085 high-resolution images from 112 538 patients (102 117 female, 8 215 male ) were used for classification in 36 of the 37 image-based studies and for classification (recurrence) and regression (recurrence-free survival) in the last study . Except for one paper including both pediatric and adult patients (unknown age proportion, 175 female, 116 male) and two other articles not listing the patients’ age group (698 in , unknown in , unidentified male–female ratio in both), all studies consisted of adults (111 469 patients, 101 942 women, 8 099 men). Eight studies (21.6%) extracted radiomic features from the retrieved images .
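Radiomic feature extraction, as used in those eight studies, reduces a delineated region of a scan to quantitative descriptors. A minimal sketch of first-order features (mean, standard deviation, histogram entropy) over a toy 2-D image and tumor mask; the intensities are synthetic, not data from any included study:

```python
# Sketch of first-order radiomic feature extraction: a delineated region
# of interest is reduced to summary statistics. The 2-D "image" and tumor
# mask below are synthetic toy values, not data from any included study.
from math import log2, sqrt

image = [[10, 12, 11, 50],
         [13, 60, 55, 52],
         [11, 58, 54, 12]]
mask  = [[0, 0, 0, 1],          # 1 marks pixels inside the delineation
         [0, 1, 1, 1],
         [0, 1, 1, 0]]

# Collect the intensities falling inside the mask.
roi = [v for img_row, msk_row in zip(image, mask)
       for v, m in zip(img_row, msk_row) if m]

mean = sum(roi) / len(roi)
variance = sum((v - mean) ** 2 for v in roi) / len(roi)
# Shannon entropy of the intensity histogram within the region.
counts = {}
for v in roi:
    counts[v] = counts.get(v, 0) + 1
entropy = -sum((c / len(roi)) * log2(c / len(roi)) for c in counts.values())

features = {"mean": mean, "std": sqrt(variance), "entropy": entropy}
```

Production pipelines compute dozens of such features (first-order, shape, and texture) per lesion; vectors of this kind are what the mixed studies fuse with tabular clinical variables.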
The studies encompassed X-rays, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography – Computed Tomography (PET-CT) scans, endoscopic images and videos, photographs, ultrasounds, histological slides, and whole-slide images (WSI). Besides digital pictures, which are limited to the surface, these imaging techniques capture the body's internal structures. However, they differ in the way they create images and the type of information they provide. X-rays create images using ionizing radiation, exposing the patient in the process . Although time- and cost-effective, these do not provide as much detail as CT or MRI scans. In this review, two studies used radiographic images (2/37, 5.4%) to: (i) classify pathologically-confirmed primary bone tumors in children and adults (639 radiographs, 175 female, 116 male) ; and (ii) screen for breast cancer in adult women ( n = 1, 213 694 X-rays, 92 585 women) . CT scans combine X-rays from different angles to create high-quality, three-dimensional images. Nevertheless, since they are generated from controlled motions of X-rays, CTs are still unfit for extracting molecular information . Furthermore, these scans subject the patient to higher radiation levels than X-rays and may require contrast agents depending on the adopted modality – contrast-enhanced CTs (CECTs) versus non-contrast CTs (NECTs). CT scans were a common input modality in the selected articles (8/37, 21.6%), amounting to 7 540 images from: (i) the lungs ( n = 4, 2 323 nodules, 2 113 patients) ; (ii) gastric cancers ( n = 2, 1 129 images, 352 women, 777 men) ; (iii) cervical lymph nodes ( n = 1, 3 838 images, 698 patients of unknown gender) ; and (iv) hepatic metastasis from colorectal cancer ( n = 1, 250 lesions, 31 women, 54 men) . MRI scans do not depend on radiation and use a strong magnetic field and radio waves to create detailed images. This type of imaging can be separated into two subtypes: conventional and advanced .
Conventional MRI (cMRI) sequences include standard MRI protocols commonly used in clinical practice, such as (i) T1-weighted, used to identify structural abnormalities; (ii) axial fluid-attenuated inversion recovery MRI (FLAIR), applied to identify abnormalities that affect the tissues' water content; and (iii) T2-weighted, also appropriate to assess irregularities in water content. Advanced MRI (advMRI) techniques generate deeper information regarding the tissue's function, structure, and metabolic processes, including: (i) multiparametric MRI (mpMRI), which combines several other MRI sequences to enrich the output; (ii) axial diffusion-weighted (DWI) MRI, which measures the movement of water molecules in tissues; (iii) Vascular architecture mapping (VAM) MRI, providing information about the tissue's blood vessels; (iv) Gradient echo dynamic susceptibility contrast (DSC) MRI, used to measure blood movement; (v) Quantitative blood-oxygenation-level-dependent (qBOLD) MRI, able to measure the oxygen content in the blood; (vi) General Electric-Dynamic Susceptibility Contrast (GE-DSC) MRI, which resorts to a contrast agent to measure blood flow; and (vii) Magnetic resonance spectroscopy (MRS), which calculates the levels of certain chemicals and metabolites in the tissues. Although some types of MRIs – such as MR spectroscopy and diffusion-weighted imaging – allow assessing molecular details without contrasts, most are better equipped to analyze gross internal structures and are more expensive than CTs and X-rays . MRI scans were also frequently used as input for the models, with 64 941 combined images from 8 studies (21.6%), including (i) the brain ( n = 3, 64 459 lesions, 623 women, 461 men) ; (ii) the prostate ( n = 2, 262 nodules, 300 men) ; (iii) colorectal malignancies ( n = 2, 154 images, 54 women, 64 men) ; and (iv) bones and cartilages ( n = 1, 65 scans, 34 women, 31 men) .
PET scans, which rely on an injected radiotracer, allow examining both the internal body structure and the underlying molecular activity of tissues. However, they are extremely expensive, usually unavailable in routine practice, and, due to their low spatial resolution, require pairing with a second modality, such as CT or MRI. In this review, one study (2.7%) used PET-CT scans to examine atypical cartilaginous tumors and appendicular chondrosarcomas (36 scans, 23 women, 13 men). Similarly to X-rays, ultrasounds – which use high-frequency sound waves to create images – provide an inexpensive method to inspect organ structures without detailing underlying molecular information, with the upside of not involving radiation. Ultrasonographic imaging was mentioned in 2 articles ( n = 2, 5.4%, 328), which studied breast cancers (116 ultrasounds, 107 women). Eight reports describe images captured with standard endoscopes ( n = 8, 24.3%, 3 681 items), which cannot capture molecular features. Four studies used colonoscopic images of lesions from the colon and rectum (995 images, 105 women, 224 men). Four studies analyzed endoscopic pictures of the esophagus ( n = 2, 260 images, 260 patients of unknown gender), the larynx ( n = 1, 1 176 images, unknown number of patients), and the nasopharynx ( n = 1, 1 430 images, 124 women, 231 men). Lastly, one study examined endoscopic videos from intramucosal gastric cancer patients (54 videos, 38 women, 16 men). Two studies used advanced endoscopes. One involved endoscopic ultrasonography (EUS), a technique that combines endoscopy and ultrasonography to gather gastrointestinal images ( n = 1, 2.7%, 212 ultrasounds, 38 women, 31 men). The other resorted to endocytoscopy, a relatively new high-magnification imaging approach that allows tissue analysis at a cellular level, to collect 100 colorectal images from 89 patients ( n = 1, 2.7%, 26 women, 63 men).
A histological image is a high-resolution, microscopic image of a tissue slide after it has been processed with one or more stains to reveal its composition. This method allows distinguishing between different histological cancer subtypes but involves a long preparation time and offers a limited depth of view. One paper used hematoxylin-and-eosin (H&E)-stained histological images to study endometrium hyperplasia and intraepithelial neoplasia ( n = 1, 2.7%, 1 631 slides, 102 women). Whole-slide images (WSIs) are virtual representations of a tissue section scanned at high resolution and magnification. WSIs are created by scanning stained histological slides and usually combine and magnify multiple slides using specialized software. This technique allows thorough tissue examination at cellular and sub-cellular levels, but it is still costly, storage-intensive, and technically demanding. WSIs were used to feed the models in three studies (8.1%, 3 315 images), using 30 × or 40 × magnification. Two included H&E-stained slides of the liver ( n = 1, 80 slides, 24 women, 56 men) and the mesothelium ( n = 1, 39 images, 39 patients of unreported gender). One was composed of stained slides (unknown stain) for the cervical screening of women without any known conditions and with the Human papillomavirus (HPV) ( n = 1, 1 565 images, one per woman). Finally, 46 962 digital photographs (captured with a camera) were analyzed across two documents (5.4%). Both inspected skin malignancies ( n = 2, 10 602 patients). Detailed information regarding the samples, the types of CTs, MRIs, and endoscopes used in the image-based studies, and population details and counts (age group, total patients, female, and male) is itemized in Table .

Text-based Studies

The populations and specific clinical variables used in each text-based study are compiled in Table . Clinical data from 6 803 patients (2 772 women, 4 031 men, 7 861 encounters) was collected for validation across ten papers .
Apart from one work including senior citizens , all studies consisted of adult patients (6 644 subjects, 2 701 women, 3 943 men). An average of 17 clinical variables was used per study (range = 6 – 31 ), encompassing information on demographics, tumoral values, and laboratory test results. The machine learning models used in 6 of the articles (60%) were exclusively developed for classification (1 960 women, 3 097 men) , while 4 (40%) solely concerned regression (812 women, 934 men) . In the four regression-based articles, the developed prognostic models assessed (i) patients with a single lesion of primary stage I to IV esophageal adenocarcinoma or squamous cell carcinoma ( n = 1, 150 women, 350 men) ; (ii) patients with pathologically confirmed and resected intrahepatic cholangiocarcinoma (12 women, 30 men) ; (iii) patients with stage I to III non-small cell lung cancer (642 women, 540 men) ; and (iv) patients in palliative care with unresectable advanced pancreatic ductal adenocarcinoma with liver metastases (8 women, 14 men) . The six classification papers included: (i) seniors with stage I to III non-small cell lung cancer treated with curative-intent radiotherapy (159 individuals, 71 women, 88 men) ; (ii) bone metastasis in kidney cancer patients with complete survival data (323 women, 640 men) ; (iii) women with primary breast cancer diagnosed by pathological examination (150 women) ; (iv) patients with primary colorectal cancer with survival-related data who underwent surgery (1 572 patients, 607 female, 965 male) ; (v) patients with confirmed stage III non-small cell lung cancer (39 women, 133 men) ; and (vi) patients with solid metastatic tumors for several types of cancer with and without alterations in treatment in an outpatient setting (3 099 encounters, 2 041 individuals, 770 women, 1 271 men) . 
Mixed Studies

An average of 9 clinical variables (range = 3 – 17), 784 images, and 720 patients (range = 44 – 5 493 for both) were used in the nine mixed studies, whose information is highlighted in Table . These papers combined patients' demographics, cancer-specific data, laboratory results, and imaging features extracted from different modalities for cancer-specific populations (7 053 images, 6 482 patients, 3 009 women, 3 478 men). Radiomics approaches were used in three studies. Six reports included CT images to study: (i) patients who underwent curative-intent resection for pancreatic ductal adenocarcinoma ( n = 1, 53 images, 27 women, 26 men); (ii) patients with benign and malignant pulmonary ground-glass nodules smaller than 30 mm ( n = 1, 63 images, 39 women, 22 men); (iii) individuals with multiple lung nodules in a post-operative setting ( n = 1, 200 images, 51 women, 27 men); (iv) lung cancer patients with an available baseline radiograph ( n = 1, 5 493 patients and images, 2 456 women, 3 037 men); (v) patients with muscle-invasive bladder cancer who underwent surgery ( n = 1, 75 images, 13 women, 62 men); and (vi) adults with pathologically confirmed thymomas and thymic carcinomas ( n = 1, 76 preoperative scans, 33 women, 48 men). Additionally, three studies used other types of scans. One work paired breast-specific data with features derived from three types of MRI scans for women with endometrial lesions and complete clinical data (44 images, 44 women). One paper combined patients' age, sex, tumor type, location, and radiomic features extracted from X-rays to analyze primary bone tumors (40 women, 56 men). Finally, one study evaluated survival- and gross-tumor-related data in conjunction with H&E slides magnified 30 times (whole-slide images) to estimate outcomes for patients diagnosed with gastric cancer (175 images, 91 patients, 60 female, 31 male).
Except for the models developed in this last study, where the first used only WSIs for classification and the second used these images and clinical data for prognostication (regression), all algorithms were classifiers. Overall, 335 085 high-resolution images from 112 538 patients (102 117 female, 8 215 male) were used across the image-based studies: for classification in 36 of the 37 papers, and for both classification (recurrence) and regression (recurrence-free survival) in the remaining one. Except for one paper including both pediatric and adult patients (unknown age proportion, 175 female, 116 male) and two other articles not listing the patients' age group (698 patients in one, an unknown number in the other, with unidentified male–female ratios in both), all studies consisted of adults (111 469 patients, 101 942 women, 8 099 men). Eight studies (21.6%) extracted radiomic features from the retrieved images.
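As context for the radiomics mentions: first-order radiomic features are summary statistics computed over the voxel intensities of a segmented region of interest. A minimal sketch of the idea (the `first_order_radiomics` helper and the toy intensity values are invented for illustration, not taken from any reviewed study):

```python
import math
from collections import Counter

def first_order_radiomics(roi, bins=8):
    """Return a few first-order radiomic features for a flat list of
    voxel intensities inside a segmented region of interest (ROI)."""
    n = len(roi)
    mean = sum(roi) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in roi) / n)
    lo, hi = min(roi), max(roi)
    # Discretize intensities into fixed-width bins for the entropy estimate.
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in roi)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "std": std, "min": lo, "max": hi, "entropy": entropy}

# Toy 4x4 "image" flattened into a list of intensities (invented values).
roi = [12, 14, 15, 13, 40, 42, 41, 39, 12, 13, 41, 40, 14, 15, 39, 42]
features = first_order_radiomics(roi)
```

In practice, the studies that extracted radiomic features relied on dedicated toolkits and far richer feature sets (texture, shape, and wavelet features), but the principle of reducing an image region to quantitative descriptors is the same.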
Information concerning institutional, study, and validation designs, care types, datasets, clinical settings, and the number of institutions involved in validation in the selected documents is illustrated in Table .
Model development and validation were performed simultaneously in most studies ( n = 49, 87.5%), while 4 (7.14%) evaluated external validity separately, and 3 (5.36%) entailed model updating and validation. Of the 56 documents included in this review, 44 (78.57%) directly reference external validation in the abstract, 10 (17.86%) mention it indirectly, and 2 (3.57%) omit this information. Overall, 74 medical datasets were used for external validation across the 56 studies, averaging 1.3 per paper (range = 1 – 8). All studies used real-world data acquired prospectively or collected from the patients' EHRs and imaging archiving platforms. Except for three articles using both standard and uncommon types of MRI scans and one using endocytoscopy (whose use is still growing), all studies used text- and image-based data routinely collected in clinical practice. However, only nine reports describe external validation in clinically realistic scenarios, and only two systems are currently implemented in practice. The papers involved several cancer-related care settings, including secondary ( n = 1, 2%), tertiary ( n = 34, 61%), and quaternary ( n = 12, 21%) oncology care. However, 6 (11%) studies did not report from which centers data were retrieved, and 3 (5%) used databases without this information. Among the collected studies, 49 (87.5%) were conducted retrospectively, 3 (5.36%) were prospective, and 4 (7.14%) were mixed: one performed internal validation prospectively and external validation retrospectively, one proceeded inversely, and two used both retrospective and prospective cohorts. Only one report used randomized data. Regarding validation design, 31 (55.36%) studies followed a multi-institutional approach, 14 (25%) collected information from a single center, 1 (1.79%) only used public databases, 2 (3.57%) used public multi-institutional databases, and 8 (14.29%) used both types of sources.
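The validation-design proportions above are simple shares of the reviewed documents; a quick arithmetic check (counts as reported in the review, with the denominator assumed to be the 56 included studies):

```python
# Validation-design counts reported in the review (assumed out of 56 studies).
TOTAL_STUDIES = 56

validation_design = {
    "multi-institutional": 31,
    "single center": 14,
    "public database only": 1,
    "public multi-institutional databases": 2,
    "both source types": 8,
}

# The counts must partition the 56 reviewed documents exactly.
assert sum(validation_design.values()) == TOTAL_STUDIES

# Shares rounded to two decimals, as reported in the text.
shares = {k: round(100 * v / TOTAL_STUDIES, 2) for k, v in validation_design.items()}
```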
For the multi-institutional studies (including databases), the average number of facilities used for validation was 3, with a maximum of 33. One study did not report the number of institutions involved. The following freely available data sources were used: (i) the Surveillance, Epidemiology, and End Results (SEER) database, which covers population-based cancer registries representing approximately 47.8% of the United States population; (ii) The Cancer Genome Atlas (TCGA, from the USA), which molecularly characterizes over 20,000 primary cancers and contains whole-slide images; (iii) The Cancer Imaging Archive, which hosts a large number of medical images for various types of cancer; (iv) the Edinburgh dataset, containing data from the University of Edinburgh (Scotland, United Kingdom); (v) the Prostate, Lung, Colorectal, and Ovarian (PLCO) randomized trial, sponsored by the National Cancer Institute (NCI) and designed to evaluate the impact of cancer screening on mortality rates, as well as to assess the potential risks and benefits associated with screening; (vi) the National Lung Screening Trial (NLST), a randomized controlled trial also supported by the NCI that aimed to evaluate the impact of using low-dose helical CT scans on patient mortality; (vii) the PROSTATEx dataset, which contains a retrospective set of prostate MRI studies; (viii) the PICTURE dataset, containing data from a single-center trial intended to evaluate the diagnostic accuracy of multiparametric magnetic resonance imaging (mpMRI) in men with prostate lesions; and (ix) the National Human Genetic Resources Sharing Service Platform (NHGRP), for which we could not find any details. In two studies, models were trained using data from multiple countries. One developed its model using patients from three Chinese institutions and one center from the United States of America (USA) and validated it on a Chinese dataset ( n = 1, 1.8%).
The other gathered data from a Chinese institution and TCGA and validated its model on images from NHGRP. Additionally, one document did not report which countries were involved in its model's development or validation. All other authors developed their models on data from a single country. These included China ( n = 19, 33.9%), the USA ( n = 12, 21.4%), South Korea ( n = 9, 16.1%), Italy and Germany (3 each, 5.4%), Japan and the Netherlands (2 each, 3.6%), and the United Kingdom (UK), Canada, and Austria (1 each, 1.8%). Besides the two abovementioned papers, twelve other studies performed international validation. Of these, seven included ethnically different sources. Two authors trained their models with data from South Korea: one validated it on South Korean and American datasets, and the other validated it on a South Korean dataset and the Edinburgh dataset (UK). Additionally, five reports mention training their models on the SEER database (USA), with four validating them with Chinese patients and one with South Korean patients. For the five remaining studies, patients with the same ethnicity were included: (i) one was developed with the NLST trial dataset (USA) and validated on data from the UK; (ii) one was trained with data from TCGA (USA) and validated on an institution from the UK; (iii) one used data from Italy for training and patients from The Netherlands for validation; (iv) one trained its model on the PROSTATEx dataset (from The Netherlands) and validated it on the PICTURE dataset (from the UK); and (v) one used a Chinese dataset for training and Chinese and South Korean patients for validation. Regarding validation types, 12 studies (21.43%) were limited to temporal validation from a single institution, which cannot be interpreted as a fully independent validation. Five other studies also only temporally validated their models.
However, two used a multi-institutional approach (3.57%), two (3.57%) used different data acquisition designs (retrospective internal validation and prospective external validation), and one evaluated performance for patients at different treatment stages (1.79%). Nine studies (16.07%) only validated their models geographically: seven within the same country, one internationally, and one with internationally and ethnically different patients. Twenty-nine reports (51.8%) included both temporal and geographical validation. Sixteen (28.57%) used local data, one evaluated temporally and geographically different patients from the same country with images captured using various scanners, and one (1.79%) used national data and mixed data acquisition (prospective internal validation and retrospective external validation). Lastly, one study that did not report data sources validated its model on different types of computed tomography (CT) scanners. The external datasets were used to evaluate the models' generalizability to populations differing – geographically, temporally, or both – from the development cohort. The performance metrics reported in the articles can be grouped into three categories: discrimination, calibration, and processing time. For classification models, an average of 5 discrimination metrics were reported per study (range = 1 – 7).
These consisted of: (i) sensitivity, reported in 48 papers; (ii) area under the receiver operating characteristic (ROC) curve (AUC), calculated in 43 studies; (iii) specificity, used in 42 articles; (iv) accuracy, presented in 35 documents; (v and vi) positive and negative predictive values (PPV and NPV), computed in 29 and 19 reports, respectively; (vii) F1-score, considered in 13 papers; (viii) C-index, used in 2 articles; (ix) false positive rate, reported in two papers; (x) area under the alternative free-response ROC curve (AUAFROC), calculated for one model; (xi) jackknife alternative free-response ROC (JAFROC), also computed for one algorithm; and (xii) Softspot (Sos) and Sweetspot (Sws) flags, both used in the same two papers. However, decision thresholds were only disclosed in half of the articles (26/52, 50%), and only three papers presented results for different cut-off values/settings. Moreover, 39 classification studies did not assess calibration. When evaluated (13/52, 25%), calibration was illustrated graphically in five studies (9.62%), via the Brier score in three documents (5.77%), using both approaches in four papers (7.69%), and with the mean absolute error (MAE) in one report. Lastly, the models' processing time was seldom revealed, with only seven studies reporting it. For the regression-based algorithms, discriminative performance was assessed via the C-index. Regarding calibration, the model's Brier score was presented in one study, calibration plots in two, both metrics in one, and none in two. The models' processing time and decision thresholds were not reported in any of these studies. Most of the selected studies ( n = 50, 89.29%) explicitly mention the assessment of the models' clinical utility, that is, their relevance to clinicians and patient outcomes, in the paper's abstract. However, one only refers to it indirectly (1.79%), and the remaining five (8.93%) do not state this aspect in their summaries.
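For reference, the most frequently reported discrimination metrics, and the Brier score used for calibration, can be computed from first principles. A minimal, self-contained sketch with invented toy predictions (not data from any reviewed study):

```python
def threshold_metrics(y_true, y_prob, threshold=0.5):
    """Confusion-matrix metrics at a fixed decision threshold."""
    tp = sum(1 for t, p in zip(y_true, y_prob) if t == 1 and p >= threshold)
    fp = sum(1 for t, p in zip(y_true, y_prob) if t == 0 and p >= threshold)
    fn = sum(1 for t, p in zip(y_true, y_prob) if t == 1 and p < threshold)
    tn = sum(1 for t, p in zip(y_true, y_prob) if t == 0 and p < threshold)
    sens = tp / (tp + fn)  # sensitivity / recall
    spec = tn / (tn + fp)  # specificity
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return {
        "sensitivity": sens, "specificity": spec, "ppv": ppv, "npv": npv,
        "accuracy": (tp + tn) / len(y_true),
        "f1": 2 * ppv * sens / (ppv + sens),
    }

def auc(y_true, y_prob):
    """AUC as the probability that a random positive case is ranked above a
    random negative one (ties count 0.5): the Mann-Whitney formulation."""
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def brier_score(y_true, y_prob):
    """Mean squared difference between outcomes and predicted probabilities
    (lower is better); a common single-number calibration summary."""
    return sum((p - t) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)

# Invented toy predictions for six cases (three positives, three negatives).
y_true = [1, 1, 1, 0, 0, 0]
y_prob = [0.9, 0.8, 0.4, 0.3, 0.6, 0.1]
metrics = threshold_metrics(y_true, y_prob)
```

Reporting results at several cut-off values, as only three of the reviewed papers did, amounts to re-running `threshold_metrics` with different `threshold` arguments; the AUC and Brier score are threshold-free.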
Two approaches were used to assess the models' utility: comparison against clinician performance, adopted in most studies (40/56, 71.4%), and benchmarking against established clinical tools (15/56, 26.8%). Additionally, one study used both: retrospective comparisons were performed against routine clinical scores, while prospective assessments involved clinicians (1/56, 1.8%).

Comparison Against Clinicians

A total of 499 medical professionals of varying expertise were involved in these studies, with an average of 12 clinicians compared against each model (range = 1 – 109). These included endoscopists ( n = 204), oncologists ( n = 77), radiologists ( n = 76), general physicians ( n = 71), dermatologists ( n = 44), pathologists ( n = 21), ophthalmologists ( n = 3), and thoracic surgeons ( n = 3). A subset of 113 115 patients (102 178 female, 9 619 male) was used for these assessments, with the same performance metrics as those documented for external validation, plus time until diagnosis. The clinicians' specific years of experience were reported in 20 papers (48.8%), ranks (without years) in 11 (26.8%), and no information concerning expertise in 10 (24.4%). The 41 classification studies encompassing model comparison against clinicians followed one of two designs: evaluating the models and the clinicians independently, or comparing clinician performance with and without the model. The most commonly adopted technique was separately assessing model and clinician performance and comparing them afterwards ( n = 30, 73.2%). A total of 401 clinicians (μ = 15 per report, range = 1 – 109) and 109 720 patients (μ = 3 657 per paper, 100 965 female, 8 203 male) were involved in these papers, and model-clinician performance was compared for detection and diagnostic capabilities.
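The independent-evaluation design can be sketched with an entirely hypothetical reader study (all labels and predictions below are invented): the model and each clinician are scored separately on the same cases and then compared.

```python
# Hypothetical reader study: ground truth, model output, and three clinicians'
# calls on the same ten cases (all values invented for illustration).
truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
model = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
readers = {
    "reader_A": [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    "reader_B": [1, 1, 1, 0, 0, 0, 0, 0, 1, 0],
    "reader_C": [1, 0, 1, 1, 0, 0, 1, 1, 0, 0],
}

def sensitivity(y_true, y_pred):
    # Fraction of truly positive cases flagged as positive.
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(pos) / len(pos)

def specificity(y_true, y_pred):
    # Fraction of truly negative cases flagged as negative.
    neg = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(1 - p for p in neg) / len(neg)

# Score the model and each clinician separately, then compare the model
# against the average reader.
scores = {name: (sensitivity(truth, preds), specificity(truth, preds))
          for name, preds in {"model": model, **readers}.items()}
avg_reader_sens = sum(scores[r][0] for r in readers) / len(readers)
```

The assisted-versus-unassisted design differs only in the data collected: each reader contributes two prediction vectors (with and without model support), and the same per-reader metrics are compared between the two conditions.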
An average of 4 performance metrics (range = 1 – 7) were computed per paper, with sensitivity being the most calculated ( n = 23), followed by specificity ( n = 18), accuracy ( n = 15), AUC ( n = 11), PPV ( n = 11), NPV ( n = 7), F1-score ( n = 3), false positive rate ( n = 2), Sweetspot and Softspot flags ( n = 2), diagnostic time ( n = 1), AUAFROC ( n = 1), and JAFROC ( n = 1). The second approach involved comparing clinician performance with and without the assistance of the artificially intelligent systems developed by the authors ( n = 11, 26.8%). The eleven studies employing this method comprised 92 clinicians (μ = 8, minimum = 1, maximum = 20) and 3 337 patients (μ = 370, 1 223 female, 1 416 male). Similarly to the previous technique, an average of 4 performance metrics were used per paper (range = 1 – 6), including sensitivity ( n = 9), specificity ( n = 8), accuracy ( n = 8), PPV ( n = 6), NPV ( n = 5), AUC ( n = 2), mean diagnostic time ( n = 2), and error rate ( n = 1).

Comparison Against Standard/Established Clinical Tools

In sixteen studies, assessing the usefulness of the models involved comparing their performance against well-established and routinely used clinical tools. In total, 11 659 patients (μ = 777 per paper, 4 521 female, 5 694 male) were encompassed in these assessments, and thirteen standard tools were used for comparisons.
These included: (i) the 7th and 8th editions of the Tumor, Node, and Metastasis (TNM) staging system; (ii) the Brock University model; (iii) the Fracture Risk Assessment Tool (FRAX); (iv) the Liver Cancer Study Group of Japan (LCSGJ); (v) the Mayo Clinic model; (vi) the modified Glasgow Prognostic Score (mGPS); (vii) the Osteoporosis Self-Assessment Tool for Asians (OSTA); (viii) the second version of the Prostate Imaging Reporting and Data System (PI-RADS v2); (ix) the Peking University (PKU) model; (x) the PLCOm2012 model; (xi) the Response Evaluation Criteria in Solid Tumors (RECIST); (xii) the Veterans Affairs (VA) model; and (xiii) the World Health Organization (WHO) performance status. Except for one study , all papers explicitly mention comparisons against these tools in the abstract. The TNM system, created by the American Joint Committee on Cancer (AJCC), is used globally in routine clinical practice. It categorizes cancer progression and guides subsequent treatment decisions based on (i) the size and extent of the primary tumor (T), (ii) whether it has spread to nearby lymph nodes (N), and (iii) whether it has metastasized to distant organs (M) . In this review, two text-based classification studies compared their models against the 7th edition of this staging system (TNM-7): one juxtaposed diagnostic and prognostic (3-year overall survival) predictions for bone metastasis in kidney cancer patients (323 women, 640 men) , while the other compared 1–10-year postoperative survival predictions for patients with colorectal cancer (607 women, 965 men) . Similarly, seven papers resorted to the 8th edition of AJCC TNM (TNM-8), its revised and updated version. On the one hand, in four articles, the models were only compared against this system. Two analyzed their text- and regression-based models to predict cancer-specific survival for esophageal (500 patients, 150 women, 350 men) and lung tumors (1 182 individuals, 642 female, 540 male) .
The other two concerned the evaluation of classification models. Using preoperative images and descriptive data, one compared 2-year overall survival and 1-year recurrence-free survival predictions for patients with pancreatic cancer (27 female, 26 male) . The other compared risk stratification performance for overall survival in lung cancer patients (39 women, 133 men) between their model and the TNM-8 system using only text-based data . On the other hand, in three text-based studies, models were compared against TNM-8 and other tools. One paper also contrasted model performance for recurrence, recurrence-free survival, and overall survival in lung cancer patients (71 women, 88 men) with the WHO performance status, often used in oncology to determine patients' overall health status, prognosis, and ability to tolerate treatment . This scale ranges from 0 to 4, where 0 represents no symptoms and pre-disease performance, and 4 translates to total disability. In the second article, predictions of overall postoperative survival were benchmarked against TNM-8 and LCSGJ (42 liver cancer patients, 12 women, 30 men) . LCSGJ is a group of Japanese medical professionals specializing in diagnosing and treating liver cancer, recognized as a leading authority in this field of cancer research. Lastly, the third study describes the development of three risk models for breast cancer patients (150 women) : (i) fracture, whose predictions were contrasted with those generated by FRAX; (ii) osteoporosis, compared against FRAX and OSTA; and (iii) survival, benchmarked against TNM-8. FRAX is a web-based tool designed to stratify 10-year bone fracture risk, and OSTA assesses the risk of osteoporosis in Asian populations . The Brock University (also known as PanCan) model is a logistic regression model devised to assist in risk stratification for lung cancer.
It is recommended in the British Thoracic Society guideline as a tool to decide whether nodules measuring 8 mm or more in maximum diameter should be assessed further with PET-CT . Here, it was applied in one of the selected papers to compare predictions of malignancy risk for lung cancer from CECT and NECT scans (1 397 images, 1 187 patients, unknown gender proportion) . In addition to the Brock model, comparisons in a second paper (978 CTs, 493 patients, 297 women, 196 men) were also performed against three other tools: (i) the Mayo model, developed by the Mayo Clinic to assess cancer prognosis and predict patient outcomes; (ii) the PKU model, created by Peking University; and (iii) the VA model, developed within the Veterans Affairs comprehensive cancer care system, which aims to provide high-quality, evidence-based care to veterans with cancer . The mGPS scale is a validated scoring system formulated to assess the prognosis of patients with advanced or metastatic cancer based on nutritional and inflammatory markers . In this review, it was used to establish clinical utility for a text-based classification model developed to predict overall survival for patients with unresectable pancreatic tumors (22 patients, 8 women, 14 men) . PI-RADS is a standardized system for interpreting and reporting findings from prostate MRI scans, created to guide clinical decision-making in diagnosing and treating prostate cancer. In this context, it was contrasted against a model developed to stratify low- and high-risk patients (39 and 14 men, respectively) . PLCOm2012 is a validated risk score that uses logistic regression to predict the probability of lung cancer occurrence within six years based on demographic and clinical information . It was the chosen comparator in a study predicting 12-year lung cancer incidence using low-dose CT images and patients’ age, sex, and smoking status (5 493 images and patients, 2 456 women, 3 037 men) .
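Tools like PLCOm2012 and the Brock model are, at their core, logistic regressions over demographic and clinical covariates. A minimal sketch of how such a risk score turns covariates into a probability, using invented coefficients and predictors (deliberately NOT the published PLCOm2012 or Brock parameters):

```python
import math

# Invented coefficients for illustration only -- not taken from any
# published risk model.
COEFFS = {"intercept": -6.0, "age": 0.05, "smoking_years": 0.07, "family_history": 0.6}

def risk_probability(age, smoking_years, family_history):
    """Logistic risk score: P = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = (COEFFS["intercept"]
         + COEFFS["age"] * age
         + COEFFS["smoking_years"] * smoking_years
         + COEFFS["family_history"] * int(family_history))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient: 62 years old, 30 years of smoking, positive family history.
p = risk_probability(age=62, smoking_years=30, family_history=True)
```

Published scores differ in their covariates and fitted coefficients, but the functional form is the same, which is why they can be benchmarked head-to-head against learned models on identical inputs.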
Finally, RECIST is a set of guidelines used to evaluate the response of solid tumors to treatment in clinical trials and clinical practice. Two classification models were benchmarked against it: one aimed at detecting pathological downstaging in advanced gastric cancer patients from CECT images (86 patients and images, 23 women, 27 men) ; the other was designed to predict pathological tumor regression grade response to neoadjuvant chemotherapy in patients with colorectal liver metastases from MRI scans (61 images, 25 patients, 13 female, 12 male) . A few performance metrics were reported for the comparisons between the models developed in the selected papers and routinely used clinical tools, with an average of 3 metrics reported per document (range = 1 – 6). Here, the most frequently calculated metrics were AUC (n = 11) and sensitivity (n = 8), but PPV (n = 5), C-index (n = 4), specificity (n = 4), accuracy (n = 3), NPV (n = 3), Brier Score (n = 2), and F1-score (n = 1) were also used in the evaluations.
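All of the threshold-based metrics recurring in these comparisons (sensitivity, specificity, PPV, NPV, accuracy, F1-score) derive from the same four confusion-matrix counts; a minimal sketch:

```python
def metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    ppv = tp / (tp + fp)                    # positive predictive value (precision)
    npv = tn / (tn + fn)                    # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean of PPV and recall
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy, "f1": f1}

m = metrics(tp=90, fp=10, fn=10, tn=90)  # balanced example: every metric is 0.9
```

AUC, C-index, and the Brier score are not derivable from these four counts alone: they require the model's full score distribution rather than a single operating threshold, which is why studies typically report them alongside, not instead of, the metrics above.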
Fifty-one papers (91.1%) describe models developed for primary tumor-related assessments. These include cancers of the CNS (brain ), digestive (colorectal , esophageal , gastric , and hepatic malignancies), endocrine (pancreas and thymus ), genitourinary (bladder , cervix , prostate , and uterus ), and integumentary (breast and skin ) systems, respiratory system and associated tissues (larynx , lung , mesothelium , and nasopharynx ), and the skeleton (cartilages and bones ).

Central nervous system

Three retrospective studies were developed to diagnose brain cancers using MRI scans, amounting to 1 084 patients and 64 459 images, resulting in an average sensitivity of 81.97% and specificity of 91.63% (Table ) .
The first involved the following conditions: acoustic neuroma, pituitary tumor, epidermoid cyst, meningioma, paraganglioma, craniopharyngioma, glioma, hemangioblastoma, metastatic tumor, germ cell tumor, medulloblastoma, chordoma, lymphomas, choroid plexus papilloma, gangliocytoma, dysembryoplastic neuroepithelial tumor, and hemangiopericytoma . The CNN-based model was trained on images from 37 871 patients and externally validated using 64 414 T1-weighted, T2-weighted, and T1c MRI scans from 1 039 subjects (600 female, 349 male) from three institutions. Its diagnostic performance was compared against nine neuroradiologists (5 to 20 years of experience) to assess clinical utility. This CNN classified brain tumors with high accuracy, sensitivity, and specificity, performing particularly well in identifying gliomas, which are difficult to diagnose using traditional imaging methods. When aided by the model, the neuroradiologists' accuracy increased by 18.9%, although it remained lower than that of the model alone. AI assistance also boosted the neuroradiologists' sensitivity, specificity, and PPV. However, only three types of scans were considered, training data were obtained from a single center, and few rare tumors were included. In the second paper, the authors explored the combination of 9 different ML models – NB, logistic regression, SVM with a polynomial kernel, kNN (k = 3), DT, MLP, RF, AdaBoost, and bootstrap aggregating – to distinguish between different types of brain tumors (glioblastoma, anaplastic glioma, meningioma, primary central nervous system lymphoma, and brain metastasis) . The nine models were evaluated across five MRI-based radiomics input sets (cMRI, advMRI, phyMRI, cMRI + phyMRI, and advMRI + phyMRI), in a total of 135 classifier-radiomics combinations. A dataset of 167 patients was used for training, and temporal validation was performed on 20 subjects. The physiological MRI (phyMRI) approach, named radiophysiomics, achieved the best results, using AdaBoost with cMRI + phyMRI and RF with phyMRI.
Both models surpassed the radiologists in AUC and F1-score but were outperformed in sensitivity and specificity. The AdaBoost model also had a higher PPV than the clinicians. However, this was a single-center, retrospective study, and the application and tuning of the models were performed manually. The third study evaluated the usefulness of preoperative contrast-enhanced T1- and T2-weighted MRI in differentiating low-grade gliomas (LGG) from glioblastomas (GBM) . The authors trained a radiomics-based RF classifier on 142 patients from 8 American centers and externally validated it on 25 patients from another institution (all from The Cancer Imaging Archive). The results showed that the machine learning algorithm was highly accurate in differentiating between GBM and LGG based on preoperative contrast-enhanced MRI scans, surpassing two neuroradiologists (15 and 1 year of experience) and a radiologist (3 years of experience). However, few patients were included, all drawn from a public database, possibly resulting in selection bias (non-random selection).

Digestive system

Malignancies of the digestive system – highlighted in Table – were the most comprehensively studied (17/56, 30.4%), encompassing colorectal (n = 7, 41.2%), esophageal (n = 3, 17.6%), gastric (n = 5, 29.4%), and liver (n = 2, 11.8%) cancers.

Colorectal Cancer

Three sets of articles addressed colorectal cancers (7 papers). The first set, consisting of four multi-institutional retrospective studies, targeted diagnosis, averaging a sensitivity of 77.3% and a specificity of 93.2% in tests on 995 images from different sources . The authors in developed an ensemble of three CNNs (Inception-v3, ResNet-50, and DenseNet-161) to predict the histology of colorectal neoplasms based on white-light colonoscopic images.
The ensemble model transferred knowledge from digital photography and learned from colonoscopic images to classify each image into one of four pathologic categories: normal (healthy), adenoma with low-grade dysplasia (A-LGD), adenoma with high-grade dysplasia (A-HGD), and adenocarcinoma. The system's diagnostic performance was compared against four experts (more than five years of experience) and six trainees (less than two years). In the external validation dataset (400 images, 100 of each type), the CNN-CAD model achieved high accuracy in predicting the histology of the lesions. Its performance was slightly better than that of the expert endoscopists and significantly better than that of the trainees. In addition, the authors used Grad-CAM to create a heatmap highlighting the regions of the input image that were most relevant to the network's decision. However, only one image per polyp was used; consequently, tumors that cannot be contained within a single image were neglected. The second work concerns the external validation and clinical utility assessment of EndoBRAIN, an AI-assisted system to classify colorectal polyps as malignant or non-malignant. EndoBRAIN was trained with 69 142 endocytoscopic images from patients with colorectal polyps from five academic centers in Japan. Its clinical validity had previously been confirmed in a single-center prospective study. However, since its implementation depends on governmental regulatory approval, the current study compared EndoBRAIN's diagnostic performance against 30 endoscopists (20 trainees, 10 experts) using stained and narrow-band endocytoscopic images in a web-based trial. The authors found that their CADx tool accurately differentiated neoplastic from non-neoplastic lesions, outperforming all endoscopists on stained images and achieving similar performance on narrow-band images, and it was accepted for clinical use.
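CNN ensembles such as the Inception-v3/ResNet-50/DenseNet-161 combination above are commonly merged by averaging the member networks' per-class probabilities and taking the argmax. A framework-free sketch (the class names follow the study's four categories; the logits are invented):

```python
import math

CLASSES = ["normal", "A-LGD", "A-HGD", "adenocarcinoma"]

def softmax(logits):
    """Convert raw logits to a probability distribution (max-shifted for stability)."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(per_model_logits):
    """Average the softmax outputs of each member network, then take the argmax."""
    probs = [softmax(logits) for logits in per_model_logits]
    avg = [sum(p[i] for p in probs) / len(probs) for i in range(len(CLASSES))]
    return CLASSES[avg.index(max(avg))], avg

# Invented logits for one image from three member CNNs:
label, avg = ensemble_predict([
    [0.2, 1.5, 0.3, 0.1],   # model 1 favours A-LGD
    [0.1, 1.2, 0.9, 0.2],   # model 2 leans A-LGD
    [0.3, 0.8, 1.1, 0.4],   # model 3 favours A-HGD
])
# Most of the averaged probability mass lands on "A-LGD"
```

Weighted averaging and majority voting are common variants; probability averaging has the advantage of preserving each member's confidence rather than only its top choice.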
The third diagnostic model concerns the development of a deep learning model to predict the revised Vienna Classification in colonoscopy, which categorizes colorectal neoplasms into different levels of malignancy, using standard endoscopic colonoscopy images . Several CNN architectures were compared, namely AlexNet, ResNet152, and EfficientNet-B8, with ResNet152 being chosen as the prediction model due to its higher accuracy and fastest inference time. The model was trained using 56 872 colonoscopy images (6 775 lesions) and validated on 255 images (128 lesions) from 7 external institutions in Japan. The authors also compared diagnostic performance against endoscopists (five novices, three fellows, and four experts). The AI system’s sensitivity and specificity exceeded those of all endoscopists. Nevertheless, the model cannot discriminate between high-grade dysplasia and invasive cancer (categories 4 and 5 of the revised Vienna Classification), and only binary classification is supported. In the fourth document, the authors tested two pre-trained radiomics-based CNN architectures (Inception-ResNet-v2 and ResNet-152) to automatically classify colorectal neoplasms in three classification settings: 7-class (T1-4 colorectal cancer, high-grade dysplasia, tubular adenoma, vs. non-neoplasms), 4-class (advanced vs. early CRC vs. adenoma vs. healthy), and 2-class (neoplastic vs. non-neoplastic and advanced vs. non-advanced lesions) . The CNNs were trained on a South Korean dataset (3 453 colonoscopy images, 1 446 patients) and temporally and geographically validated on 240 images (and as many patients) from another institution. CAM was used to highlight its decisions. The best-performing architecture was ResNet-152 for 7-way and 4-way diagnoses, but Inception-ResNet-v2 achieved better results on binary classifications.
In addition, the model's performance was compared with one novice and two experienced endoscopists with six months and more than five years of colonoscopy experience, respectively. Although resulting in high accuracy, neither CNN architecture could outperform the endoscopists. Furthermore, this retrospective study only considered three types of diseases and white-light colonoscopy images. The second set of articles was devoted to predicting outcomes from MRI scans in patients with colorectal cancer undergoing neoadjuvant chemoradiotherapy (NCRT), accruing 143 MRIs from 118 patients and a mean AUC and accuracy of 0.77 and 81.9%, respectively . The first was a prospective study using a multipath CNN on MRI scans (diffusion kurtosis and T2-weighted) . The authors used a dataset of 412 patients (290 for development and 93 for temporal validation) with locally advanced rectal adenocarcinoma scheduled for NCRT. The researchers developed three multipath CNN-based models: one to preoperatively predict pathologic complete response (pCR) to neoadjuvant chemoradiotherapy, one to assess tumor regression grade (TRG; TRG0 and TRG1 vs. TRG2 and TRG3), and one to predict T downstaging. In addition, the authors evaluated the models' utility by comparing the performance of two radiologists (10 and 15 years of experience) with and without their assistance. The results showed excellent performance in predicting pCR, superior to the assessment by the two radiologists, whose error rate was also reduced when assisted by the DL model. Although with lower performance, the TRG and T-downstaging models also achieved promising results, with AUCs of 0.70 and 0.79, respectively, though without outperforming the clinicians. Nevertheless, this monoinstitutional research required manual delineation, and interobserver variability was not analyzed. Moreover, further validation studies are necessary to assess performance with different MRI scanners.
The second group of researchers developed an MRI-based CNN (DC3CNN) to predict tumor regression grade (an assessment of tumor size) in response to NCRT in patients with colorectal liver metastases . The authors used prospective internal (328 lesions from 155 patients) and retrospective external cohorts (61 images, 25 patients) to collect pre- and post-treatment T2-weighted and DW-MRI scans. The model surpassed the diagnostic accuracy of RECIST, the most commonly used criteria for clinical evaluation of solid tumor response to chemotherapy. However, the study was retrospective, and further studies are needed to validate its performance in larger, ethnically diverse patient populations. Lastly, only one model assessed postoperative survival of colorectal cancer using text-based data . The model was trained on the SEER database (364 316 patients) and externally validated (temporally and ethnically) on a Korean dataset (1 572 subjects, 607 women, 965 men). The authors compared 4 ML algorithms, namely logistic regression, DTs, RFs, and LightGBM, to obtain an optimal prognostic model. The best-performing model – LightGBM – outperformed TNM-7 in predicting survival for all tested periods (1, 2, 3, 4, 5, 6, 8, and 10 years). Still, data were collected retrospectively from a public database and a single institution using only text-based data, so prospective studies are necessary, and clinicopathological, molecular, and radiologic variables should also be incorporated.

Esophageal Cancer

Three studies involved esophageal cancers. Two papers studied neoplasia detection in patients with Barrett’s esophagus, a medical condition resulting from long-term acid-reflux damage, which causes the esophageal tissue lining to thicken and become irritated, increasing cancer risk . The same group of researchers conducted both studies: the first paper describes model development for detection , while the second encompasses its tuning and update to include location .
The authors proposed a multi-stage pretraining approach that involved training a CNN on 494 355 gastrointestinal images before fine-tuning it on a smaller dataset of medical images specific to Barrett's neoplasia. The model was trained with images from different endoscopes. In the first paper , using data from separate institutions, the authors used a retrospective dataset of early Barrett’s neoplasia for primary validation (80 patients, unknown gender proportion) and a second prospectively acquired dataset (80 patients and images) to compare their model’s performance against fifty-three endoscopists (17 seniors, 8 juniors, 18 fellows, and 10 novices). In the second paper, the researchers validated their model on three prospective datasets: one with clinically representative images (80 individuals), one with subtle lesions (80 subjects), and one in a live setting with dysplastic and nondysplastic patients (ten each) . It showed excellent performance on the three external validation datasets, and its detection and localization performance was also compared against the 53 experienced endoscopists on the subtle lesions. The CAD system outperformed all 53 endoscopists on all tested metrics in both papers, obtaining an average accuracy, sensitivity, and specificity of 87.9%, 91.7%, and 84.16%, respectively. The models developed in both articles performed similarly and were tested in clinically realistic scenarios, with an average accuracy, sensitivity, and specificity of 88.45%, 91.25%, and 85.63%, respectively, underscoring CNNs’ predictive power. Additionally, a retrospective study evaluated cancer-specific survival for esophageal adenocarcinoma and squamous cell carcinoma according to individual treatment recommendations . The authors trained a deep-, regression-, and text-based survival neural network (DeepSurv, a multi-layer perceptron) using the SEER database (6 855 patients) and validated it on 150 women and 350 men from their institution (China).
Additionally, prognostic performance was compared against TNM-8, which it exceeded. However, only one medical center was used, and the research was not performed in an accurately representative clinical setting.

Gastric Cancer

In five articles, models were developed for gastric-related tasks. The first three studies had a diagnostic component. In the first, the authors developed two models – GastroMIL and MIL-GC – training them on WSIs from H&E slides magnified 30 times collected from TCGA and a Chinese institution. They also temporally and geographically validated them with 175 WSIs from 91 patients from NHGRP . GastroMIL used an ensemble of a CNN and an RNN to distinguish gastric cancer from normal gastric tissue images. Its performance was compared against one junior and three expert pathologists. MIL-GC, a regression-based model, was created to predict patients’ overall survival. Besides WSIs, MIL-GC uses clinical data, namely survival state, overall survival time, age, sex, tumor size, neoplasm histologic grade, and pathologic T, N, M, and TNM-8 stages. The deep learning models achieved high performance in both tasks, with an overall accuracy of 92% for diagnosis and a C-index of 0.657 for prognosis prediction in the external dataset. Compared to human performance, GastroMIL outperformed the junior pathologist in accuracy and sensitivity but was surpassed by the experienced pathologists (in accuracy, sensitivity, and specificity). However, the tested cohorts were retrospective and had unbalanced survival times, and clinical utility was not evaluated for the prognostic model. The second study used a CNN (ResNet-50) for real-time gastric cancer diagnosis . The model was developed with 3 407 endoscopic images of 666 patients with gastric lesions from two institutions.
The DCNN model was tested on a temporally different dataset of endoscopic videos from a separate institution (54 videos from 54 patients), and performance was compared against 20 endoscopists (6 experts, 14 novices). The model achieved better performance than any of the endoscopists, and diagnostic accuracy, sensitivity, and specificity increased for all clinicians when assisted by the model. Nevertheless, despite decreasing the aggregate diagnostic time from 4.35 s to 3.01 s, it increased the experts' diagnostic time by 0.10 s. In addition, the diagnostic model was only tested on high-quality images, and the validation dataset was small and domestic. Although slightly less sensitive than GastroMIL (93.2% vs. 93.4%), the model developed in the second study achieved the best accuracy and specificity, evidencing that endoscopic images and videos might be more appropriate for diagnosing gastric cancer. The third model was created using endoscopic ultrasonography (EUS) images for the differential diagnosis of gastric mesenchymal tumors, including GISTs, leiomyomas, and schwannomas. This model was trained with EUS images from three Korean institutions and tested on a temporally separate set of 212 images from the same centers (69 patients, 38 female, 31 male). A sequential analysis approach was adopted using two CNNs: the first classifies the tumor as GIST or non-GIST; for non-GISTs, the second CNN classifies the lesion as either a leiomyoma or a schwannoma. The results were compared against junior (n = 3, fewer than 200 examinations) and expert endoscopists (n = 3, more than 500 examinations) who evaluated the same images, surpassing them in both types of classification. However, this study was retrospective and involved a small number of patients, and the types of equipment used to perform the ultrasounds varied considerably across the facilities. The last two papers concerned outcome predictions.
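The sequential two-CNN analysis described above for gastric mesenchymal tumors amounts to a cascade classifier. A minimal sketch, with stand-in scoring functions in place of the trained CNNs (function names and the threshold are illustrative assumptions, not taken from the original paper):

```python
def classify_mesenchymal_tumor(image, gist_score, leiomyoma_score, threshold=0.5):
    """Two-stage cascade: stage 1 separates GIST from non-GIST;
    stage 2 runs only on non-GISTs to split leiomyoma vs. schwannoma.
    `gist_score` and `leiomyoma_score` stand in for the two trained CNNs,
    each returning a probability-like score for its positive class."""
    if gist_score(image) >= threshold:
        return "GIST"
    # Only non-GISTs reach the second classifier.
    return "leiomyoma" if leiomyoma_score(image) >= threshold else "schwannoma"
```

The benefit of the cascade is that the second model only ever sees the harder, rarer non-GIST cases, so each stage solves a simpler binary problem.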
The first presents a multi-institutional study that uses multitask deep learning to predict peritoneal recurrence and disease-free survival in gastric cancer patients after curative-intent surgery based on CT images . Supervised contrastive learning and a dynamic convolutional neural network were combined to achieve this purpose, and Grad-CAM was used to explain the model’s decisions. The model included CT scans from three patient cohorts, and external validation included 1 043 patients (329 women, 714 men) and as many images from another Chinese institution. In addition, the authors investigated clinician performance for peritoneal recurrence prediction with and without the assistance of the AI model, having found that performance was significantly enhanced after integrating it and that the model alone surpassed all physicians. Nonetheless, only East Asian patients were included in this retrospective study, which was not performed in a real clinical setting, and sensitivity was only reported for one of the clinicians. The last study discusses the use of CT radiomics to predict the response of advanced gastric cancer to neoadjuvant chemotherapy and to detect pathological downstaging at an early stage . The authors trained two SVCs on 206 patients who had undergone three or four cycles of chemotherapy and externally validated them on two testing cohorts, which were also used for benchmarking detection against RECIST. The first testing cohort consists of temporal validation (40 patients and CTs, 13 women, 27 men), while the second differs in the number of chemotherapy cycles (46 individuals and CTs, 10 women, 36 men). Performance for the detection model surpassed RECIST in both cohorts, and, except for sensitivity, the response prediction model also produced positive results. However, retrospective data and a small, unbalanced sample size constrain this study, which was not evaluated in a clinically representative setting. 
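Grad-CAM, used in the recurrence study above to explain CT-based predictions, weights a convolutional layer's feature maps by their spatially pooled gradients. A minimal NumPy sketch, assuming the activations and gradients have already been extracted from the network (the extraction itself is framework-specific and omitted here):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one convolutional layer.
    activations: (K, H, W) feature maps; gradients: (K, H, W) gradients of
    the class score w.r.t. those activations. Each map's weight is its
    spatially averaged gradient; the heatmap is the ReLU of the weighted
    sum, normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))             # (K,) pooled gradients
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Upsampled to the input resolution, the resulting map highlights the image regions that pushed the prediction toward the class of interest.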
Liver Cancer

Two models were developed for liver cancer-related predictions. The first aimed at classifying hepatocellular carcinomas and cholangiocarcinomas (differential diagnosis). The authors developed a web-based (cloud-deployed AI model and browser-based interface) CNN (DenseNet architecture) using WSIs from H&E slides magnified 40 times and applied Grad-CAM to increase the model's explainability. The training dataset was obtained from TCGA (70 slides from 70 unique patients). The external validation dataset was collected from the Department of Pathology at Stanford University Medical Center (80 slides from 24 women and 56 men). The model achieved a diagnostic accuracy of 84.2% in the validation cohort. Diagnostic performance was also compared to that of 11 pathologists. Except for two unspecified pathologists, performance (AUC) increased for all clinicians when assisted by this tool. However, the pathologists only had access to the WSIs (as opposed to having them complemented with clinical data), the model required manual intervention for patch selection, and the study was retrospective with a small sample size (development and external validation with a total of 150 WSIs and patients). The second model was designed to predict three-year overall survival for intrahepatic cholangiocarcinoma patients after hepatectomy using an ensemble of Random Forests, XGBoost, and GBDT. From a single quaternary Chinese institution, the authors collected 1390 patients for training and 42 patients (12 women, 30 men) for external temporal validation. Results were compared against the TNM-8 and LCSGJ staging systems, with model performance exceeding that of the routinely used tools. Nonetheless, this was a monoinstitutional endeavor limited to a small number of Asian patients. Furthermore, only six prognostic factors were used: carcinoembryonic antigen, carbohydrate antigen 19–9, alpha-fetoprotein, pre-albumin, and T and N stages.
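Survival predictors like these are commonly scored with Harrell's concordance index (the C-index reported for MIL-GC above): the fraction of comparable patient pairs in which the patient who died earlier was assigned the higher predicted risk. A minimal pure-Python sketch for right-censored data (illustrative, not any paper's implementation):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable when the earlier time belongs to an
    observed event (events[i] == 1, not a censoring); it is concordant
    when the shorter survival carries the higher predicted risk.
    Ties in risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:  # i's event observed first
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts values such as 0.657 in context as modest but better-than-chance discrimination.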
Endocrine system

Three papers described prognostic models for cancers in organs affecting the endocrine system (pancreas and thymus), whose results are depicted in Table .

Pancreatic Cancer

The first two studies assessed survival for pancreatic ductal adenocarcinoma (PDAC) patients but adopted disparate research designs and clinical inputs. The first group of researchers used a regression-based random survival forest model to prognosticate patients with advanced pancreatic cancer. Aimed at predicting overall survival for patients with unresectable PDAC, the model was developed with clinical data and CT scans from a German institution (203 patients). It was temporally and geographically validated using only text-based clinical data from patients with liver metastases from the same country (8 women, 14 men) and compared against mGPS, having outperformed it. Additionally, the authors used SHAP to explain their model, finding that the inflammatory markers C-reactive protein and neutrophil-to-lymphocyte ratio had the most significant influence on its decision-making. Nonetheless, only 22 national patients were used to validate the model externally, and different types of inputs were used for training and testing. The second set of authors used an ensemble of ML methods – ANN, logistic regression, RF, GB, SVM, and CNNs (3D ResNet-18, R(2 + 1)D-18, 3D ResNeXt-50, and 3D DenseNet-121) – to predict 2-year overall and 1-year recurrence-free survival for PDAC patients after surgical resection. The classifier was trained and tuned using 229 patients and temporally validated with CECT images and seventeen clinical variables from the same South Korean institution (53 CECTs from 27 women and 26 men). Grad-CAM was used to explain the model's decisions, and comparisons were made against TNM-8 to evaluate clinical utility.
Although more accurate, more specific, and with a higher PPV than TNM-8, it was less sensitive for both predictions and had a lower NPV for overall survival prediction. Furthermore, tumor margins were manually segmented, and the model did not consider histopathologic data.

Thymic Cancer

One study was designed for the simplified risk categorization of thymic epithelial tumors (TETs), rare cancer forms. Here, three types of tumors were evaluated: low-risk thymoma (LRT), high-risk thymoma (HRT), and thymic carcinoma (TC). Three triple classification models were developed using radiomic features extracted from preoperative NECT images and clinical data from 433 patients: (i) LRT vs. HRT + TC; (ii) HRT vs. LRT + TC; and (iii) TC vs. LRT + HRT. The authors compared several CT-based classifiers: logistic regression, linear SVC, Bernoulli and Gaussian Naïve Bayes, LDA, Stochastic Gradient Descent, SVM, DT, kNN, MLP, RF, AdaBoost, gradient boosting, and XGBoost. Combined with clinical data, the SVM model demonstrated the best performance for predicting the simplified TETs risk categorization. In addition, the SVM model was validated in a temporally different cohort using images from 5 types of scanners (76 scans and patients, 33 women, 48 men). Finally, its diagnostic performance was compared against three radiologists (3, 6, and 12 years of experience), exceeding them in AUC (0.844 versus 0.645, 0.813, and 0.724) but not in the other metrics (accuracy, sensitivity, and specificity). Caveats include the small number of patients, the low number of thymic carcinomas, and the incomplete automation of the models.

Genitourinary system

Table illustrates the models developed for genitourinary cancers, including the bladder, cervix, prostate, and uterus.

Bladder Cancer

From the retrieved models, only one assesses outcomes for primary bladder cancers. This article presents a CNN-based strategy to predict the muscular invasiveness of bladder cancer based on CT images and clinical data.
The model was developed with 183 patients. Its performance was tested on a temporally and geographically different validation cohort of urothelial carcinoma patients from an independent institution (13 women, 62 men, and as many images). The model's predictions were juxtaposed with diagnoses from two radiologists with nine and two years of experience, achieving better accuracy and specificity than the two clinicians but lower sensitivity. Overall, the authors found that the deep learning algorithm achieved a high accuracy rate in predicting muscular invasiveness, an essential factor in determining the prognosis and treatment of bladder cancer. However, the study is limited by its retrospective nature, the exclusion of tumors not visible in CT images, and a small sample size.

Cervical Cancer

Similarly, primary tumors of the cervix were only screened in one paper. Here, the authors trained an ensemble of convolutional and recurrent neural networks on whole-slide images from patients' cervical biopsies and 79 911 annotations from five hospitals and five kinds of scanners. The system comprises (i) two CNNs – the first scans WSIs at low resolution and the second at high resolution – to identify and locate the ten most suspicious areas in each slide; and (ii) an RNN to predict the corresponding probabilities. The system classifies squamous and glandular epithelial cell abnormalities as positive (neoplastic) and normal findings as negative for intraepithelial lesions or malignancies (non-neoplastic). The method was externally validated on multi-center independent test sets of 1 565 women (1 170 without additional conditions and 395 with HPV), and classification performance was compared against three cytopathologists. Although obtaining promising results and surpassing clinician performance for both groups of women, the authors highlight that the model was designed for the general female population, implying that further refinements are required for specific comorbidities.
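The first stage of this two-CNN pipeline reduces a gigapixel slide to its few most suspicious regions before the RNN aggregates them into one slide-level probability. A minimal sketch of that triage step, with a stand-in scoring function in place of the trained CNN (illustrative assumptions only):

```python
import heapq

def top_suspicious_patches(patches, score_fn, k=10):
    """Rank slide patches by a CNN-style suspicion score and keep the k
    most suspicious, highest score first, for a downstream RNN to
    aggregate into a single slide-level probability.
    `score_fn` stands in for the trained high-resolution CNN."""
    scored = [(score_fn(p), i) for i, p in enumerate(patches)]
    top = heapq.nlargest(k, scored)   # sorts by score, then by index
    return [patches[i] for _, i in top]
```

Keeping only the top-k patches is what makes whole-slide inference tractable: the expensive recurrent aggregation runs on ten regions instead of tens of thousands.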
Prostate Cancer

Two models were developed for prostate-cancer-related classifications using multiparametric MRI scans. In the first paper, the authors describe the development of AutoProstate, a system employing deep learning to generate a report summarizing the probability that suspicious lesions qualify as clinically significant prostate cancer (CSPCa). The authors trained their approach on the PROSTATEx dataset (249 men), externally validated it on the PICTURE dataset (247 patients), and compared its reports (with post-thresholding and false positive reduction) to those generated by a radiologist with ten years of experience. The system achieved a high level of agreement with the human reports (surpassing the radiologist in AUC and specificity) and could accurately identify CSPCa. However, this study was retrospective, a single (public) dataset was used for external validation, and only two types of prostate lesions were considered. The second article presented an ML-based approach for prostate cancer risk stratification using radiomics applied to multiparametric MRI scans. In this retrospective, monoinstitutional study, the authors compared seven classification algorithms: logistic regression, linear, quadratic (Q), cubic, and Gaussian kernel-based SVM, linear discriminant analysis, and RF. After training with 68 patients, the best-performing method – QSVM – was validated on a temporally independent dataset (14 high- and 39 low-risk patients). Its performance was compared against PI-RADS v2, and the model could accurately predict the risk of clinically significant prostate cancer. Although the classifier performed equivalently to PI-RADS v2 regarding AUC, it performed substantially better in class-specific measures (F1-score, sensitivity, and PPV), especially for the high-risk class. However, the study is limited by its retrospective nature and a small sample size from a single source.
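The class-specific measures cited here follow directly from a class's confusion-matrix counts. A minimal sketch (illustrative):

```python
def class_metrics(tp, fp, fn):
    """Class-specific measures from confusion-matrix counts:
    sensitivity (recall), PPV (precision), and their harmonic mean,
    the F1-score. Zero denominators return 0.0."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * sensitivity * ppv / (sensitivity + ppv)
          if sensitivity + ppv else 0.0)
    return {"sensitivity": sensitivity, "ppv": ppv, "f1": f1}
```

Because the F1-score balances missed cases (fn) against false alarms (fp) per class, it exposes weaknesses on a rare high-risk class that a global AUC can hide, which is why the QSVM comparison above reports both.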
Uterine Cancer

Two studies for primary cancers focused on classifying lesions of the endometrium, the layer of tissue lining the uterus. In the first article, using 245 women as the training cohort, the authors compared nine models – logistic regression (LR), SVM, stochastic gradient descent, kNN, DT, RF, ExtraTrees, XGBoost, and LightGBM – to obtain an optimal algorithm for differential diagnosis (malignant versus benign tumors). A radiomics score (radscore) was computed for the best-performing algorithm (logistic regression), and four models were selected using different combinations of T1-weighted, T2-weighted, and DWI MRI features: (i) the radiomics model; (ii) a nomogram combining the radscore and clinical predictive parameters; (iii) a two-tiered stacking model, where the first tier was the clinical model and the optimal radiomics model (LR), and the second tier used the output of the first tier as the input of a multivariate LR; and (iv) an ensemble model, where the predictions of the preceding clinical and radiomics models were combined by an accuracy-weighted average. The results showed that all four models accurately differentiated stage IA endometrial cancer from benign endometrial lesions. Furthermore, during external validation (44 MRIs from 44 women), the authors found that the nomogram had a higher AUC than the radiomics model, revealing more stable discrimination efficiency and better generalizability than the stacking and ensemble models and than a radiologist with 30 years of experience (except in sensitivity). Nevertheless, data was collected from two centers in the same country (China), only standard radiomics features were extracted, and lesions were manually segmented, which is highly time-consuming. The second paper encompassed a global-to-local multi-scale CNN to diagnose endometrial hyperplasia and screen for endometrial intraepithelial neoplasia (EIN) in histopathological images.
The researchers trained the CNN using a large annotated dataset (6 248 images) and tested it on a temporally different set of patients (1 631 images, 135 specimens, 102 women). They found that it performed well in diagnosing endometrial hyperplasia and detecting EIN, outperforming a junior pathologist (2 years of experience) and obtaining comparable performance to a mid-level and a senior pathologist (6 and 25 years of experience, respectively). The authors used Grad-CAM to emphasize the regions the model deemed relevant for diagnosis. However, this retrospective study only used histopathological images (as opposed to WSIs). Besides, it focused solely on classifying healthy slides, hyperplasia without atypia, and endometrial intraepithelial neoplasia, thus neglecting the differentiation between benign lesions and endometrial cancer.

Integumentary system

As illustrated in Table , five papers studied cancers of the integumentary system, focusing on the breasts and skin.

Breast Cancer

Three studies developed models for cancers originating in the breasts, each with a specific purpose and using different clinical modalities. In the first, several text-based machine learning classifiers, namely DTs, RFs, MLPs, logistic regression, naïve Bayes, and XGBoost, were compared to establish optimal classifiers for osteoporosis, relative fracture, and 8-year overall survival predictions. The algorithm was trained on 420 patients from a Chinese institution and geographically validated on 150 women from a separate local institution. The osteoporosis model was compared against OSTA and FRAX, the fracture model against FRAX, and the prognostic model against TNM-8. The results showed that the XGBoost classifier performed best for the three tasks and outperformed the other clinical models.
Additionally, for explainability, the authors used SHAP for feature-importance analysis on each model: (i) age, use of anti-estrogens, and molecular type are the most predictive of osteoporosis; (ii) osteoporosis, age, and bone-specific alkaline phosphatase are the best predictors of fracture; and (iii) N-stage, molecular type, and age have the highest prognostic value for overall survival. Despite the positive results, prospective studies are needed to validate the model in more diverse patient populations. In the second study, the authors explored how combining AI and radiologists can improve breast cancer screening. Using 213 694 retrospectively collected mammograms (X-ray images) from 92 585 women, they found that the combination of radiologists and AI (a CNN-based classifier) achieved the highest accuracy in detecting breast cancer. The sensitivity and specificity of the standalone AI system were significantly lower than those of an unaided radiologist. However, the decision-referral approach outperformed the unaided radiologist in both sensitivity and specificity for several tested thresholds. Nonetheless, the study only included mammogram images and did not consider other factors, such as patient history or clinical data, which may impact the accuracy of breast cancer screening. Furthermore, the AI algorithm used in the study was not optimized for clinical use and may require further development and testing before it can be implemented in a clinical setting. Lastly, the third study entailed diagnosing non-cystic benign and malignant breast lesions from ultrasonographic images. Radiomic features were extracted from the ultrasound images, and a random forest model was trained with 135 lesions and externally validated to predict malignancy for each lesion. Moreover, the performance of an experienced radiologist (8 years) was compared with and without the model's assistance.
Although not statistically significant, the radiologist's assessments improved when using the AI system. However, the final validation population was small (66 ultrasounds from 57 women) and showed different proportions of malignant lesions.

Skin Cancer

Two models were developed to diagnose skin tumors using photographs, producing an average AUC, sensitivity, and specificity of 0.89, 77.1%, and 81.74%. The first was a retrospective validation study assessing the performance of deep neural networks in detecting and diagnosing benign and malignant skin neoplasms of the head and neck, trunk, arms, and legs. In a previous study, the authors had trained an ensemble of CNNs (SENet + SE-ResNeXt-50 + faster RCNN) with 1 106 886 image crops from South Korean patients to detect potential lesions and classify skin malignancies. Here, performance was tested on three new temporal and geographical validation datasets of skin lesions (two national, one international, 46 696 photographs from 10 876 patients) across four evaluations: (i) comparing the model's classification performance against 65 attending physicians in real-world practice; (ii) evaluating classification performance against 44 dermatologists in an experimental setting; and (iii and iv) predicting the exact diagnosis (1 of 43 primary skin neoplasms) in a local (South Korean) and an international (UK, 1 300 images) dataset, with the first also being compared against physicians. In (i) and (ii), performance was calculated at high-specificity and high-sensitivity thresholds. The algorithm was more sensitive and specific than the dermatologists in the experimental setting. However, attending physicians outperformed it in real-world practice in all tested metrics (sensitivity, specificity, PPV, and NPV). In addition, the model only dealt with high-quality clinical photographs, and there was a lack of ethnic diversity in the study population.
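Evaluating at high-sensitivity and high-specificity operating points, as done for the skin models above, means choosing a decision threshold on the model's output scores. A minimal sketch of picking the lowest threshold that meets a target specificity (illustrative; it assumes cases scoring above the threshold are called malignant):

```python
import math

def threshold_for_specificity(scores, labels, min_specificity=0.95):
    """Choose the lowest threshold t (cases with score > t are called
    malignant) whose specificity on the negatives reaches the target,
    keeping sensitivity as high as the constraint allows."""
    negatives = sorted(s for s, y in zip(scores, labels) if y == 0)
    if not negatives:
        raise ValueError("no negative cases to estimate specificity")
    # Number of negatives that must fall at or below t to hit the target.
    k = math.ceil(min_specificity * len(negatives))
    return negatives[k - 1]
```

A high-sensitivity operating point is chosen symmetrically on the positives; reporting both ends, as the study did, shows how the model trades missed melanomas against unnecessary referrals.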
The second paper presented a set of CNNs – DenseNet-121 (Faster R-CNN and a deep classification network) – developed to detect malignant eyelid tumors from photographic images. The researchers used a dataset of 1 417 clinical images with 1 533 eyelid tumors from 851 patients across three Chinese institutions (one for development and two for external validation). Besides using Grad-CAM for interpretation, the AI's performance on the external dataset (266 pictures from 176 patients) was compared to three ophthalmologists: one junior, one senior, and one expert (3, 7, and 15 years of experience, respectively). It surpassed the junior and senior ophthalmologists' performance and achieved similar results to the expert. Notwithstanding its potential, the system still needs evaluation on non-Asian populations and prospectively acquired datasets, and it was only developed for detection (it cannot provide a specific diagnosis).

Respiratory system and associated tissues

Thirteen papers addressed respiratory system cancers, which predominantly concerned the lungs but also included the larynx, nasopharynx, and mesothelium (Table ).

Lung Cancer

Ten approaches were developed for lung cancer assessments. The first document describes a validation study of a CNN-based tool (DenseNet) designed to predict the malignancy of pulmonary nodules. The model was previously trained with the NLST dataset and was now externally validated in 3 UK centers with different CT scanners (1 397 CECTs and NECTs, 1 187 patients of unknown gender ratio). The authors also evaluated its clinical utility by comparing it to the Brock model. Although slightly less specific than the Brock model, the detection algorithm had a higher AUC and sensitivity. Despite having undergone international validation, prospective studies in ethnically diverse populations are still missing.
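AUC, the headline metric in most of these comparisons, equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch via the Mann-Whitney statistic (illustrative; real toolkits compute the same quantity from the ranked scores):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a random positive scores higher than a
    random negative, with ties counting as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case of each class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because AUC is threshold-free, a model can beat a comparator such as the Brock model on AUC while still being less specific at the particular operating point chosen for clinical use, exactly the pattern reported above.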
The second paper involved developing and validating a model to predict the malignancy of multiple pulmonary nodules from CT scans and eleven clinical variables. The study analyzed data from various medical centers. Eleven ML methods were compared to identify the best malignancy predictor: AdaBoost, DT, logistic regression, linear SVM, radial basis function kernel SVM, NB, kNN, Neural Net, quadratic discriminant analysis, RF, and XGBoost. The best-performing model – XGBoost – was tested on three datasets. The first was retrospective, compiled from 6 institutions (five from China and one from South Korea), used for primary external validation (220 patients, 583 CT scans), and compared against four well-established models: Brock, Mayo, PKU, and VA. The second retrospective dataset was used to assess generalizability, containing patients from a Chinese institution with solitary pulmonary nodules (195 patients and images, 110 women, 85 men), whose results were also compared against the four just-mentioned models. The third and last dataset included data from 4 Chinese centers and was collected prospectively for secondary validation and comparisons against clinicians (200 CTs, 78 patients, 51 women, 27 men). This comparison involved three thoracic surgeons and one radiologist, who achieved an average sensitivity of 0.651 and specificity of 0.679. The model significantly outperformed this average and each clinician's AUC, and it surpassed the four routinely used models in all comparisons. In addition, SHAP was used to identify the most predictive nodule characteristics: nodule size, type, count, border, patient age, spiculation, lobulation, emphysema, nodule location, and distribution. Nonetheless, besides not reporting individual clinician sensitivity and specificity in the prospective cohort, the drawbacks of this study include only assessing typical high-risk patients and the lack of validation with different ethnicities.
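SHAP attributions like those above rank input features by their contribution to the model's output. A lighter, model-agnostic alternative with a similar goal is permutation importance, sketched below (an illustrative substitute, not the paper's method): shuffle one feature at a time and measure how much a chosen metric drops.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic feature ranking: shuffle one column at a time and
    record the average drop in the metric. `predict` maps a list of
    feature rows to predicted labels; larger drops mean the model
    leans more heavily on that feature."""
    rng = random.Random(seed)
    baseline = metric(y, predict(X))
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            shuffled = [row[:] for row in X]          # copy rows
            values = [row[col] for row in shuffled]
            rng.shuffle(values)                       # break the feature-label link
            for row, v in zip(shuffled, values):
                row[col] = v
            drops.append(baseline - metric(y, predict(shuffled)))
        importances.append(sum(drops) / n_repeats)
    return importances
```

Unlike SHAP, this gives one global score per feature rather than per-prediction attributions, but it needs nothing beyond the fitted model's predictions.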
The work in the third paper involved a CNN-based model for predicting the presence of visceral pleural invasion in patients with early-stage lung cancer. The deep learning model was trained using a dataset of CT scans from 676 patients and externally validated on a temporally different cohort from the same South Korean institution (141 CTs from 84 women and 57 men). Besides using Grad-CAM to evidence its decisions, this CNN can adapt its sensitivity and specificity to meet the clinical needs of individual patients and clinicians. The model achieved a performance level comparable to three expert radiologists but did not surpass them except in PPV. Besides, these are results from a monoinstitutional retrospective study where geographical validation was not performed. In addition to using a small number of patients, the data was imbalanced, and the model was not fully automated (it required manual tumor annotations). The fourth article concerns the development of an EfficientNetV2-based CNN system to predict the survival benefit of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs) in patients with stage IV non-small cell lung cancer. The model was developed with accessible pre-therapy CT images from five centers and externally validated on a monoinstitutional national dataset (China, 92 CTs from 92 patients). The authors also compared radiologists' and oncologists' (three each, with 2, 5, and 10 years of experience) performance with and without ESBP. The results showed that, while assisted by the system, all clinicians improved their diagnostic accuracy, sensitivity, specificity, PPV, and NPV (except for the trainee oncologist, who achieved better sensitivity without the model). However, prospective studies in ethnically rich cohorts are still necessary before this tool can be implemented in clinical practice.
The fifth study aimed at finding optimal predictors of two-year recurrence, recurrence-free survival, and overall survival after curative-intent radiotherapy for non-small cell lung cancer. Ten text-based ML models were trained on 498 patients and compared: ANN, linear and non-linear SVM, generalized linear model, kNN, RF, MDA, partial least squares, NB, and XGBoost. The best-performing models were as follows: (i) an ensemble of kNN, NB, and RF for recurrence classification; (ii) kNN for recurrence-free survival prediction; and (iii) a combination of XGBoost, ANN, and MDA for overall survival. The three optimal predictors were externally validated using routinely collected data from 5 UK institutions (159 patients, 71 women, 88 men) and compared against TNM-8 and WHO performance status. The recurrence and overall survival models outperformed both routinely used systems, but these tools surpassed the recurrence-free survival predictor's performance. Moreover, this study was retrospective and had a small sample size with missing data. The sixth study was designed to identify high-risk smokers to predict long-term lung cancer incidence (12 years). In this paper, the authors developed an Inception-v4 convolutional neural network based on low-dose chest CT images, age, sex, and current-versus-former smoking status. The CNN was trained using patients from the PLCO trial and externally validated on data from the NLST randomized controlled trial (2456 women and 3037 men from 33 USA institutions). The model was also compared against PLCOm2012 to evaluate clinical utility, exceeding its performance on all assessed metrics (AUC, sensitivity, specificity, PPV, and NPV). However, this study was retrospective, lacked ethnic diversity, and was not evaluated in a clinically realistic scenario. Additionally, information from symptomatic patients was unavailable because the data came from a screening trial.
In the seventh article, a CNN-based model was developed for the automated detection and diagnosis of malignant pulmonary nodules on CECT scans. The algorithm was externally validated on four separate datasets with ethnic differences (three from South Korea and one from the USA, amounting to 693 patients and CTs). Furthermore, the diagnostic performance of 18 physicians (from non-radiologists to radiologists with 26 years of experience) was compared with and without the algorithm's assistance on one dataset. The model achieved excellent performance on the four tested datasets, outperforming all clinicians, and the professionals' accuracy increased while aided by the model in all tested groups. Nonetheless, the model was undertrained for small nodules (< 1 cm) and trained to detect malignant nodules on only one type of projection (posterior-anterior), and the study was retrospective and not representative of a real-world clinical setting. The eighth algorithm consisted of a multilayer perceptron (feed-forward neural network) paired with a Cox proportional hazards model to predict cancer-specific survival for non-small cell lung cancer. The text-based model was trained using the SEER database and externally validated on patients from a Chinese tertiary pulmonary hospital (642 women, 540 men). It was compared against TNM-8, which it outperformed with statistical significance. Although tested with real-world clinical data, prospective multi-institutional studies are needed before the deep learning model can be used in clinical practice. The ninth article described developing, validating, and comparing three CNN models to differentiate between benign and malignant pulmonary ground-glass nodules (GGNs). The first CNN only used CT images. The second CNN used clinical data: age, sex, and smoking history. The third was a fusion model combining CTs and clinical features, achieving the best performance.
This model was temporally and geographically validated with 63 CT scans from 61 patients (39 women, 22 men). Its classification performance was compared against two radiologists (5 and 10 years of experience) for the clinical utility assessment. Despite performing satisfactorily in external validation, the model was surpassed by both clinicians in accuracy, sensitivity, and NPV, only producing higher results for specificity and PPV. Furthermore, this study was retrospective, and validation was neither international nor performed in a realistic clinical setting. In the tenth and final paper, a Neural Multitask Logistic Regression (N-MTLR) network was developed for survival risk stratification for stage III non-small cell lung cancer. The text-based deep learning system was trained on 16 613 patients from the SEER database and externally validated on subjects from a Chinese institution (172 patients, 39 women, 133 men). The results in the external dataset showed that the model could predict survival outcomes more accurately than TNM-8 (AUC of 0.7439 vs. 0.561). These results suggest that the deep learning system could be used for personalized treatment planning and stratification for patients with stage III non-small cell lung cancer. However, prospective studies on multi-institutional datasets are still required.

Laryngeal, Mesothelial and Nasopharyngeal Cancers

Three models were developed to assess tumors of other elements of the respiratory system. In the first study, the authors trained a CNN (GoogLeNet Inception v3 network) with 13 721 raw endoscopic laryngeal images – including laryngeal cancer (LCA), precancerous laryngeal lesions (PRELCA), benign laryngeal tumors (BLT), and healthy tissue – from three Chinese institutions (1 816 patients).
External validation was performed on 1 176 white-light endoscopic images from two additional institutions in the same country (392 patients), testing the model for binary classification – urgent (LCA and PRELCA) or non-urgent (BLT and healthy) – and between the four conditions. Predictions for both classification types were compared against three endoscopists (3, 3 to 10, and 10 to 20 years of experience). In two-way classification, the algorithm was less accurate than one endoscopist and less sensitive than two but outperformed all clinicians in four-way diagnostic accuracy. Still, this accuracy was relatively low (less than 80%), the study was retrospective, and all tested laryngoscopic images were obtained by the same type of standard endoscopes. Cancers of the mesothelium were approached in a single retrospective multi-center study . The paper uses DL to distinguish between two types of mesothelial cell proliferations: sarcomatoid malignant mesotheliomas (SMM) and benign spindle cell mesothelial proliferations (BSCMP). SMMs and BSCMPs are difficult to distinguish using traditional histopathological methods, resulting in misdiagnoses. The authors propose a new strategy—SpindleMesoNET—that uses an ensemble of a CNN and an RNN to analyze WSIs of H&E-stained mesothelial slides magnified 40 times. The model was trained on a Canadian dataset, externally validated on 39 images from 39 patients from a Chinese center, and compared against the diagnostic performance of three pathologists on a referral test set (40 WSIs from 40 patients). The accuracy and specificity of SpindleMesoNET on the referral set cases (92.5% and 100%, respectively) exceeded that of the three pathologists on the same slide set (91.7% and 96.5%). However, the pathologists were more sensitive than the diagnostic model (87.3% vs. 85.3%). 
In addition, the study had a minimal sample size, and only AUC was reported for the external validation dataset (0.989), which, although considerably high, is insufficient to assess the model’s effectiveness. The last study entailed developing and validating a CNN-based model to differentiate malignant carcinoma from benign nasopharyngeal lesions using white-light endoscopic images . Malignant conditions included lymphoma, rhabdomyosarcoma, olfactory neuroblastoma, malignant melanoma, and plasmacytoma. Benign subtypes encompassed precancerous or atypical hyperplasia, fibroangioma, leiomyoma, meningioma, minor salivary gland tumor, fungal infection, tuberculosis, chronic inflammation, adenoids or lymphoid hyperplasia, nasopharyngeal cyst, and foreign body. The model was trained on 27 536 images collected retrospectively (7 951 subjects) and temporally (prospectively) externally validated with 1 430 images (from 355 patients) from the same Chinese institution. Diagnostic performance was compared against 14 endoscopists: (i) three experts with more than five years of experience; (ii) eight residents with one year of experience; and (iii) interns with less than three months of experience. Except for the interns’ sensitivity, the model’s diagnostic performance surpassed the endoscopists in all tested metrics. However, data were collected from a single tertiary institution, and more malignancies should be included. Although not developed for the same cancer type, the two cancer detection studies for the larynx and nasopharynx are comparable due to using white-light endoscopic images. Both used CNNs and involved more than 300 patients and 1000 images, but the optimal diagnostic performance – although less sensitive (72% vs. 90.2% in ) – was achieved for the GoogLeNet Inception v3 network CNN with an AUC of 0.953, an accuracy of 89.7%, and a specificity of 94.8%, enhancing the value of pre-training CNNs. 
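Since nearly every comparison in this review hinges on the same diagnostic metrics, a minimal sketch may help fix their definitions. The function below is illustrative only (not taken from any of the reviewed papers) and computes accuracy, sensitivity, specificity, PPV, and NPV from binary predictions and ground-truth labels:

```python
# Illustrative sketch: the diagnostic metrics used throughout these
# comparisons, computed from binary labels (1 = malignant, 0 = benign).

def diagnostic_metrics(y_true, y_pred):
    """Return common diagnostic metrics from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on the malignant class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1])
```

With the sample labels above (three true positives, three true negatives, one false positive, one false negative), all five metrics equal 0.75; in general a single error moves different metrics depending on whether it is a false positive or a false negative.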
Skeletal system

Four studies using different imaging techniques were designed to diagnose bone cancers, producing an average AUC of 0.88 (Table ). The first two radiomics-based models were developed for the binary classification of atypical cartilaginous tumors (ACT) and appendicular chondrosarcomas (CS) . In , a LogitBoost algorithm was temporally and geographically validated on 36 PET-CT scans from 23 women and 13 men. Besides externally validating their method, the authors evaluated clinical utility by comparing its diagnostic performance against a radiologist. The model performed satisfactorily in all calculated metrics (AUC, accuracy, sensitivity, PPV, and F1-score), but its accuracy was lower than the radiologist's. In addition, only non-contrast PET-CT scans were included in the analyses. In the following year, research performed by the same first author evaluated bone tumor diagnosis from MRI scans . Radiomic features were extracted from T1-weighted MRI scans, and an ExtraTrees algorithm was trained to classify the tumors. On an external validation dataset of 65 images (34 women, 31 men), the model achieved a PPV, sensitivity, and F1-score of 92%, 98%, and 0.95 in classifying ACTs, and 94%, 80%, and 0.86 for the classification of grade II CS of long bones, respectively (weighted average is presented in Table ). The model's classification performance was compared against a radiologist with 35 years of experience to assess clinical utility, finding that it could not match the radiologist's performance. Using SHAP, it was also found that certain radiomic features, such as the mean and standard deviation of gradient magnitude and entropy, significantly differed between the two tumor types. Drawbacks include the study’s retrospective nature, using only one type of MRI, and over-representing appendicular chondrosarcomas compared to cartilaginous tumors in the study population.
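The radiomic features highlighted by SHAP above (mean and standard deviation of the gradient magnitude, and intensity entropy) can be computed with a few lines of NumPy. This is a simplified illustration, not the authors' extraction pipeline:

```python
import numpy as np

# Simplified sketch (not the reviewed pipeline): three radiomic features
# for a 2-D image patch -- gradient-magnitude mean and standard deviation,
# and Shannon entropy of the intensity histogram.

def radiomic_features(patch, bins=32):
    gy, gx = np.gradient(patch.astype(float))       # finite-difference gradients
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    hist, _ = np.histogram(patch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                    # drop empty bins
    entropy = -np.sum(p * np.log2(p))               # Shannon entropy (bits)
    return {
        "grad_mean": float(grad_mag.mean()),
        "grad_std": float(grad_mag.std()),
        "entropy": float(entropy),
    }

rng = np.random.default_rng(0)
feats = radiomic_features(rng.integers(0, 256, size=(64, 64)))
```

A flat (constant-intensity) patch yields zero gradient magnitude and zero entropy, while a noisy patch approaches the maximum entropy of log2(bins) bits, which is the kind of contrast such features exploit.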
The second set of papers used neural networks to differentiate benign from malignant bone tumors from X-ray images . On the one hand, in , a CNN (EfficientNet-B0) was developed on a dataset of 2899 radiographic images from 1356 patients with primary bone tumors from 5 institutions (3 for training, 2 for validation), including benign (1523 images, 679 patients), intermediate (635 images, 317 patients), and malignant (741 images, 360 patients) growths. The CNN model was developed for binary (benign versus not benign and malignant versus not malignant) and three-way (benign versus intermediate versus malignant) tumor classification. The authors also compared the model’s triple-way classification performance against two musculoskeletal subspecialists with 25 and 23 years of experience and three junior radiologists with 6, 1, and 7 years of experience. The deep learning algorithm had similar accuracy to the subspecialists and better performance than junior radiologists. However, only a modest number of patients was used for validation (639 X-rays from 291 patients), tumor classes were unbalanced (smaller number of benign bone tumors compared to intermediate and malignant), and the pipeline was not fully automated. In contrast, other authors resorted to a non-deep ANN that uses radiomic features extracted from X-ray images and demographic data to classify and differentiate malignant and benign bone tumors . The ANN was developed on 880 patients with the following conditions: (i) malignant tumors: chondrosarcoma, osteosarcoma, Ewing’s sarcoma, plasma cell myeloma, non-Hodgkin lymphoma B cell, and chordoma; (ii) benign subtypes: osteochondroma, enchondroma, chondroblastoma, osteoid osteoma, giant cell tumor, non-ossifying fibroma, haemangioma, aneurysmal bone cyst, simple bone cyst, fibrous dysplasia. The method was externally validated on 96 patients from a different institution, and performance was compared against four radiologists (two residents and two specialized). 
The model was more sensitive than both radiologist groups but was outperformed by the specialized radiologists in accuracy and specificity. In addition, the model requires manual segmentations and can only distinguish between benign and malignant tumors and not specific subtypes.

Three retrospective studies were developed to diagnose brain cancers using MRI scans, amounting to 1 084 patients and 64 459 images, resulting in an average sensitivity of 81.97% and specificity of 91.63% (Table ) . The first involved the following conditions: acoustic neuroma, pituitary tumor, epidermoid cyst, meningioma, paraganglioma, craniopharyngioma, glioma, hemangioblastoma, metastatic tumor, germ cell tumor, medulloblastoma, chordoma, lymphomas, choroid plexus papilloma, gangliocytoma, dysembryoplastic neuroepithelial tumor, and hemangiopericytoma . The CNN-based model was trained on images from 37 871 patients and externally validated using 64 414 T1-weighted, T2-weighted, and T1c MRI scans from 1039 subjects (600 female, 349 male) from three institutions. Its diagnostic performance was compared against nine neuroradiologists (5 to 20 years of experience) to assess clinical utility. This CNN classified brain tumors with high accuracy, sensitivity, and specificity, performing particularly well in identifying gliomas, which are difficult to diagnose using traditional imaging methods. When aided by the model, the neuroradiologists' accuracy increased by 18.9%, which was still lower than the model alone. AI assistance also boosted the neuroradiologists' sensitivity, specificity, and PPV. However, only three types of scans were considered, training data was obtained from a single center, and few rare tumors were included.
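The averaged sensitivity and specificity figures quoted for multiclass models such as these are typically derived per class in a one-vs-rest fashion. A minimal sketch, with invented labels rather than data from the reviewed studies:

```python
# Illustrative sketch: per-class sensitivity and specificity for a
# multiclass tumor classifier, treating each class one-vs-rest.
# The labels below are invented for demonstration.

def one_vs_rest_metrics(y_true, y_pred, classes):
    out = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t != c and p != c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        out[c] = {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}
    return out

y_true = ["glioma", "glioma", "meningioma", "metastasis", "meningioma", "glioma"]
y_pred = ["glioma", "meningioma", "meningioma", "metastasis", "meningioma", "glioma"]
m = one_vs_rest_metrics(y_true, y_pred, ["glioma", "meningioma", "metastasis"])
```

Averaging these per-class values (macro-averaging) produces the single sensitivity/specificity numbers reported for multiclass models, which is why one misclassified glioma lowers glioma sensitivity while simultaneously lowering meningioma specificity.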
In the second paper, the authors explored the combination of 9 different ML models – NB, logistic regression, SVM with a polynomial kernel, kNN (k = 3), DT, MLP, RF, AdaBoost, and bootstrap aggregating – to distinguish between different types of brain tumors (glioblastoma, anaplastic glioma, meningioma, primary central nervous system lymphoma, and brain metastasis) . MRI techniques were analyzed in a combination of 135 classifiers and radiomics: cMRI, advMRI, phyMRI, cMRI + phyMRI, and advMRI + phyMRI. A dataset of 167 patients was used for training, and temporal validation was performed on 20 subjects. Physiological MRI scans (phyMRI), named radiophysiomics, achieved the best results using AdaBoost with cMRI and phyMRI and RF with phyMRI. Both models surpassed the radiologists in AUC and F1-score but were outperformed in sensitivity and specificity. The AdaBoost model also had a higher PPV than the clinicians. However, this was a single-center, retrospective study, and the application and tuning of the models were performed manually. The third study evaluated the usefulness of preoperative contrast-enhanced T1- and T2-weighted MRI in differentiating low-grade gliomas (LGG) from glioblastomas (GBM) . The authors trained a radiomics-based RF classifier on 142 patients from 8 American centers and externally validated it on 25 patients from another institution (all from The Cancer Imaging Archive). The results showed that the machine learning algorithm was highly accurate in differentiating between GBM and LGG based on preoperative contrast-enhanced MRI scans, surpassing two neuroradiologists (15 and 1 year of experience) and a radiologist (3 years of experience). However, few patients from a public database were collected, possibly resulting in selection bias (non-random selection). 
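AUC, the metric used to rank these classifier-and-feature combinations, can be read as the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney formulation). A small self-contained sketch:

```python
# Sketch of AUC computed directly from its probabilistic definition:
# the fraction of positive/negative pairs where the positive case
# receives the higher score (ties count half).

def auc(scores_pos, scores_neg):
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# A model scoring every malignant case above every benign case has AUC 1.0;
# an uninformative model hovers around 0.5.
a = auc([0.9, 0.8, 0.7], [0.6, 0.4])
```

This pairwise view explains why AUC is threshold-independent: it depends only on the ranking of scores, not on any particular operating point.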
Malignancies of the digestive system – highlighted in Table – were the most comprehensively studied (17/56, 30.4%), encompassing colorectal (n = 7, 41.2%), esophageal (n = 3, 17.6%), gastric (n = 5, 29.4%), and liver (n = 2, 11.8%) cancers.

Colorectal Cancer

Three sets of articles addressed colorectal cancers (7 papers). The goal of the first set, consisting of four multi-institutional retrospective studies, was its diagnosis, averaging a sensitivity of 77.3% and a specificity of 93.2% for tests on 995 images from different sources . The authors in developed an ensemble of three CNNs (Inception-v3, ResNet-50, and DenseNet-161) to predict the histology of colorectal neoplasms based on white light colonoscopic images. The ensemble model transferred knowledge from digital photography and learned with colonoscopic images to classify the images into one of 4 different pathologic categories: normal (healthy), adenoma with low-grade dysplasia (A-LGD), adenoma with high-grade dysplasia (A-HGD), and adenocarcinoma. The system's diagnostic performance was compared against four experts (more than five years of experience) and six trainees (less than two years). In the external validation dataset (400 images, 100 of each type), the CNN-CAD model achieved high accuracy in predicting the histology of the lesions. Compared to endoscopists, the model's performance was slightly better than the experts' and significantly outperformed the trainees. In addition, the authors used Grad-CAM to create a heatmap highlighting the regions of the input image that were most relevant to the network's decision. However, only one image per polyp was used; consequently, tumors that cannot be contained within a single image were neglected. The second work concerns the external validation and clinical utility assessment of EndoBRAIN, an AI-assisted system to classify colorectal polyps into malignant or non-malignant.
EndoBRAIN was trained with 69 142 endocytoscopic images from patients with colorectal polyps from five academic centers in Japan. Its clinical validity had previously been confirmed in a single-center prospective study. However, since its implementation depends on governmental regulatory approval, the current study compared EndoBRAIN's diagnostic performance against 30 endoscopists (20 trainees, 10 experts) using stained and narrow-band endocytoscopic images in a web-based trial. The authors found their CADx tool accurately differentiated neoplastic from non-neoplastic lesions, outperforming all endoscopists for stained images, achieving similar performance in narrow-band images, and being accepted for clinical use. The third diagnostic model concerns the development of a deep learning model to predict the revised Vienna Classification in colonoscopy, which categorizes colorectal neoplasms into different levels of malignancy using standard endoscopic colonoscopy images . Several CNN architectures were compared, namely AlexNet, ResNet152, and EfficientNet-B8, with ResNet152 being chosen as the prediction model due to its higher accuracy and fastest inference time. The model was trained using 56,872 colonoscopy images (6775 lesions) and validated on 255 images (128 lesions) from 7 external institutions in Japan. The authors also compared diagnostic performance against endoscopists (five novices, three fellows, and four experts). The AI system’s sensitivity and specificity exceeded that of all endoscopists. Nevertheless, the model cannot discriminate between high-grade dysplasia and invasive cancer (categories 4 and 5 of the revised Vienna Classification), and only binary classification is supported. 
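The three-CNN ensemble used for colorectal histology above follows a common pattern: each network outputs a class-probability vector and the ensemble averages them before taking the argmax. A hedged sketch with invented probability vectors (the architecture names in the comments mirror the study, but the numbers are illustrative):

```python
import numpy as np

# Sketch of softmax-averaging over several CNNs. The class names match
# the four pathologic categories described above; the probability
# vectors are invented for illustration.

CLASSES = ["normal", "A-LGD", "A-HGD", "adenocarcinoma"]

def ensemble_predict(prob_vectors):
    avg = np.mean(prob_vectors, axis=0)       # average the softmax outputs
    return CLASSES[int(np.argmax(avg))], avg

label, avg = ensemble_predict([
    np.array([0.10, 0.60, 0.20, 0.10]),       # e.g. Inception-v3
    np.array([0.05, 0.55, 0.30, 0.10]),       # e.g. ResNet-50
    np.array([0.20, 0.40, 0.30, 0.10]),       # e.g. DenseNet-161
])
```

Averaging probabilities rather than hard votes lets a confident network outweigh two uncertain ones, which is one reason such ensembles often beat their individual members.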
In the fourth document, the authors tested two pre-trained radiomics-based CNN architectures (Inception-ResNet-v2 and ResNet-152) to classify colorectal neoplasms into three types of sets automatically: 7-class (T1-4 colorectal cancer, high-grade dysplasia, tubular adenoma, vs. non-neoplasms), 4-class (neoplastic vs. non-neoplastic – advanced vs. early CRC vs. adenoma vs. healthy), and 2-class (neoplastic versus non-neoplastic and advanced versus non-advanced lesions) . The CNNs were trained on a South Korean dataset (3453 colonoscopy images, 1446 patients) and temporally and geographically validated on 240 images (and as many patients) from another institution. CAM was used to highlight its decisions. The best-performing architecture was ResNet-152 for 7-way and 4-way diagnoses, but Inception-ResNet-v2 achieved better results on binary classifications. In addition, the model's performance was compared with one novice and two experienced endoscopists with six months and more than five years of colonoscopy experience, respectively. Although resulting in high accuracy, neither CNN architecture could outperform the endoscopists. Furthermore, this retrospective study only considered three types of diseases and white-light colonoscopy images. The second set of articles was devoted to predicting outcomes from MRI scans in patients with colorectal cancer undergoing neoadjuvant chemotherapy (NCRT), accruing 143 MRIs from 118 patients and a mean AUC and accuracy of 0.77 and 81.9%, respectively . The first was a prospective study using a multipath CNN on MRI scans (diffusion kurtosis and T2-weighted) . The authors used a dataset of 412 patients (290 for development and 93 for temporal validation) with locally advanced rectal adenocarcinoma scheduled for NCRT. 
The researchers developed three multipath CNN-based models: one to preoperatively predict pathologic complete response (pCR) to neoadjuvant chemoradiotherapy, one to assess tumor regression grade (TRG) (TRG0 and TRG1 vs. TRG2 and TRG3), and one to predict T downstaging. In addition, the authors evaluated the models' utility by comparing two radiologists' – with 10 and 15 years of experience – performance with and without their assistance. The results showed excellent performance in predicting pCR, superior to the assessment by the two radiologists, whose error rate was also reduced when assisted by the DL model. Although with lower performance, the TRG and T downstaging models also achieved promising results with an AUC of 0.70 and 0.79, respectively (although not outperforming the clinicians). Nevertheless, this monoinstitutional research required manual delineation, and interobserver variability was not analyzed. Moreover, further validation studies are necessary to assess performance with different MRI scanners. The second group of researchers developed an MRI-based CNN (DC3CNN) to predict tumor regression grade (assessment of tumor size) in response to NCRT in patients with colorectal liver metastases . The authors used prospective internal (328 lesions from 155 patients) and retrospective external cohorts (61 images, 25 patients) to collect pre and post-treatment T2-weighted- and DW-MRI scans. The model surpassed the diagnostic accuracy of RECIST, the most commonly used criteria for clinical evaluation of solid tumor response to chemotherapy. However, the study was retrospective, and further studies are needed to validate its performance in larger ethnically diverse patient populations. Lastly, only one model assessed postoperative survival of colorectal cancer using text-based data . The model was trained on the SEER database (364 316 patients) and externally validated (temporally and ethnically) on a Korean dataset (1 572 subjects, 607 women, 965 men). 
The authors compared 4 ML algorithms, namely logistic regression, DTs, RFs, and LightGBM, to obtain an optimal prognostic model. The best-performing model – LightGBM – outperformed TNM-7 in predicting survival for all tested periods (1, 2, 3, 4, 5, 6, 8, and 10 years). Still, data were collected retrospectively from a public database and a single institution using only text-based data, so prospective studies are necessary, and clinicopathological, molecular, and radiologic variables should also be incorporated.

Esophageal Cancer

Three studies involved esophageal cancers. Two papers studied neoplasia detection in patients with Barrett’s esophagus, a medical condition resulting from long-term acid-reflux damage, causing esophageal tissue lining to thicken and become irritated, increasing cancer risk . The same group of researchers conducted both studies: the first paper describes model development for detection , while the second encompasses its tuning and update to include location . The authors proposed a multi-stage pretraining approach that involved training a CNN learning model on 494,355 gastrointestinal images before fine-tuning it on a smaller dataset of medical images specific to Barrett’s neoplasia. The model was trained with images from different endoscopes. In the first paper , using data from separate institutions, the authors used a retrospective dataset of early Barrett’s neoplasia for primary validation (80 patients, unknown proportion) and a second prospectively acquired dataset (80 patients and images) to compare their model’s performance against fifty-three endoscopists (17 seniors, 8 juniors, 18 fellows, and 10 novices). In the second paper, the researchers validated their model on three prospective datasets: one with clinically representative images (80 individuals), one with subtle lesions (80 subjects), and one in a live setting with dysplastic and nondysplastic patients (ten each) .
It showed excellent performance on the three external validation datasets, and its detection and location performances were also compared against the 53 experienced endoscopists on the subtle lesions. The CAD system outperformed all 53 endoscopists for all tested metrics in both papers, obtaining an average accuracy, sensitivity, and specificity of 87.9%, 91.7%, and 84.16%, respectively. The models developed in both articles performed similarly and were tested in clinically realistic scenarios, with an average accuracy, sensitivity, and specificity of 88.45%, 91.25%, and 85.63%, respectively, enhancing CNNs’ predictive power. Additionally, a retrospective study evaluated cancer-specific survival for esophageal adenocarcinoma and squamous cell carcinoma according to individual treatment recommendations . The authors trained a deep-, regression-, and text-based survival neural network (DeepSurv, multi-layer perceptron) using the SEER database (6855 patients) and validated it on 150 women and 350 men from their institution (China). Prognostic performance was also compared against TNM-8, which it exceeded. However, only one medical center was used, and research was not performed in an accurately representative clinical setting.

Gastric Cancer

In five articles, models were developed for gastric-related tasks. The first three studies had a diagnostic component. In the first study, the authors developed two models – GastroMIL and MIL-GC – trained on WSIs from H&E slides magnified 30 times, collected from TCGA and a Chinese institution. They also temporally and geographically validated them with 175 WSIs from 91 patients from NHGRP . GastroMIL used an ensemble of a CNN and an RNN to distinguish gastric cancer from normal gastric tissue images. Its performance was compared against one junior and three expert pathologists. MIL-GC, a regression-based model, was created to predict patients’ overall survival.
Besides WSIs, MIL-GC uses clinical data, namely survival state, overall survival time, age, sex, tumor size, neoplasm histologic grade, and pathologic T, N, M, and TNM-8 stages. The deep learning models achieved high performance in both tasks, with an overall accuracy of 92% for diagnosis and a C-index of 0.657 for prognosis prediction in the external dataset. Compared to human performance, GastroMIL outperformed the junior pathologist in accuracy and sensitivity but was surpassed by the experienced pathologists (in accuracy, sensitivity, and specificity). However, the tested cohorts were retrospective and had unbalanced survival times, and clinical utility was not evaluated for the prognostic model. The second study used a CNN (ResNet-50) for real-time gastric cancer diagnosis . The model was developed with 3 407 endoscopic images of 666 patients with gastric lesions from two institutions. The DCNN model was tested on a temporally different dataset of endoscopic videos from a separate institution (54 videos from 54 patients), and performance was compared against 20 endoscopists (6 experts, 14 novices). The model achieved better performance than any of the endoscopists, and diagnostic accuracy, sensitivity, and specificity increased for all clinicians while assisted by the model. Nevertheless, despite decreasing the aggregate diagnostic time from 4.35 s to 3.01 s, it increased experts’ diagnostic time by 0.10 s. In addition, the diagnostic model was only tested on high-quality images, and the validation dataset was small and domestic. Although slightly less sensitive than GastroMIL (93.2% vs. 93.4%), the model developed in achieved the best accuracy and specificity, evidencing that endoscopic images and videos might be more appropriate to diagnose gastric cancer. The third model was created using endoscopic ultrasonography images (EUS) for the differential diagnosis of gastric mesenchymal tumors, including GISTs, leiomyomas, and schwannomas .
This model was trained with EUS from three Korean institutions and tested on a temporally separate set of 212 images from the same centers (69 patients, 38 female, 31 male). A sequential analysis approach was adopted using two CNNs: the first classifies the tumor as GIST or non-GIST; for non-GISTs, the second CNN classifies it as either a leiomyoma or schwannoma. The results were compared against junior (n = 3, less than 200 examinations) and expert endoscopists (n = 3, more than 500 examinations) who evaluated the same images, having surpassed them in both types of classification. However, this study was retrospective and involved a small number of patients, and the types of equipment used to perform ultrasounds varied considerably across the facilities. The last two papers concerned outcome predictions. The first presents a multi-institutional study that uses multitask deep learning to predict peritoneal recurrence and disease-free survival in gastric cancer patients after curative-intent surgery based on CT images . Supervised contrastive learning and a dynamic convolutional neural network were combined to achieve this purpose, and Grad-CAM was used to explain the model’s decisions. The model included CT scans from three patient cohorts, and external validation included 1 043 patients (329 women, 714 men) and as many images from another Chinese institution. In addition, the authors investigated clinician performance for peritoneal recurrence prediction with and without the assistance of the AI model, having found that performance was significantly enhanced after integrating it and that the model alone surpassed all physicians. Nonetheless, only East Asian patients were included in this retrospective study, which was not performed in a real clinical setting, and sensitivity was only reported for one of the clinicians.
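The sequential two-CNN design described above for gastric mesenchymal tumors amounts to a classifier cascade: stage one separates GIST from non-GIST, and only non-GISTs reach the leiomyoma-vs-schwannoma model. A minimal sketch with stand-in classifiers (the feature name and lambdas are purely illustrative, not from the study):

```python
# Sketch of a two-stage diagnostic cascade. The two classifiers are
# passed in as callables; here they are trivial stand-ins keyed on a
# made-up feature, whereas the study used two CNNs on EUS images.

def two_stage_diagnosis(image, is_gist, leiomyoma_vs_schwannoma):
    if is_gist(image):                    # stage 1: GIST vs. non-GIST
        return "GIST"
    # stage 2: only non-GISTs are routed to the second classifier
    return "leiomyoma" if leiomyoma_vs_schwannoma(image) else "schwannoma"

diag = two_stage_diagnosis(
    {"echo_pattern": "heterogeneous"},    # invented feature for illustration
    is_gist=lambda img: img["echo_pattern"] == "heterogeneous",
    leiomyoma_vs_schwannoma=lambda img: True,
)
```

A cascade like this lets each stage specialize on a simpler binary decision, at the cost that a stage-one error can never be corrected downstream.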
The last study discusses the use of CT radiomics to predict the response of advanced gastric cancer to neoadjuvant chemotherapy and to detect pathological downstaging at an early stage . The authors trained two SVCs on 206 patients who had undergone three or four cycles of chemotherapy and externally validated them on two testing cohorts, which were also used for benchmarking detection against RECIST. The first testing cohort consists of temporal validation (40 patients and CTs, 13 women, 27 men), while the second differs in the number of chemotherapy cycles (46 individuals and CTs, 10 women, 36 men). Performance for the detection model surpassed RECIST in both cohorts, and, except for sensitivity, the response prediction model also produced positive results. However, retrospective data and a small, unbalanced sample size constrain this study, which was not evaluated in a clinically representative setting.

Liver Cancer

Two models were developed for liver cancer-related predictions. The first aimed at classifying hepatocellular carcinomas and cholangiocarcinomas (differential diagnosis) . The authors developed a web-based (cloud-deployed AI model and browser-based interface) CNN (DenseNet architecture) using WSIs from H&E slides magnified 40 times and used Grad-CAM to increase the model’s explainability. The training dataset was obtained from TCGA (70 slides from 70 unique patients). The external validation dataset was collected from the Department of Pathology at Stanford University Medical Center (80 slides from 24 women and 56 men). The model achieved a diagnostic accuracy of 84.2% in the validation cohort. Diagnostic performance was also compared to that of 11 pathologists. Except for the two unspecified pathologists, performance (AUC) increased for all clinicians when assisted by this tool.
However, the pathologists only had access to the WSIs (as opposed to being complemented with clinical data), the model required manual intervention for patch selection, and the study was retrospective with a small sample size (development and external validation with a total of 150 WSIs and patients). The second model was designed to predict three-year overall survival for intrahepatic cholangiocarcinoma patients after undergoing hepatectomy using an ensemble of Random Forests, XGBoost, and GBDT . Using a single quaternary Chinese institution, the authors collected 1390 patients for training and 42 patients (12 women, 30 men) for external temporal validation. Results were compared against the TNM-8 and LCSGJ staging systems, with model performance exceeding that of the routinely used tools. Nonetheless, this was a monoinstitutional endeavor limited to a small number of Asian patients. Furthermore, only six prognostic factors were used: carcinoembryonic antigen, carbohydrate antigen 19–9, alpha-fetoprotein, pre-albumin, and T and N stages. Three sets of articles addressed colorectal cancers (7 papers). The goal of the first set, consisting of four multi-institutional retrospective studies, was its diagnosis, averaging a sensitivity of 77.3% and a specificity of 93.2% for tests on 995 images from different sources . The authors in developed an ensemble of three CNNs (Inception-v3, ResNet-50, and DenseNet-161) to predict the histology of colorectal neoplasms based on white light colonoscopic images. The ensemble model transferred knowledge from digital photography and learned with colonoscopic images to classify the images into one of 4 different pathologic categories: normal (healthy), adenoma with low-grade dysplasia (A-LGD), adenoma with high-grade dysplasia (A-HGD), and adenocarcinoma. The system's diagnostic performance was compared against four experts (more than five years of experience) and six trainees (less than two years). 
In the external validation dataset (400 images, 100 of each type), the CNN-CAD model achieved high accuracy in predicting the histology of the lesions. Compared to endoscopists, the model's performance was slightly better than the experts' and significantly outperformed the trainees. In addition, the authors used Grad-CAM to create a heatmap highlighting the regions of the input image that were most relevant to the network's decision. However, only one image per polyp was used; consequently, tumors that cannot be contained within a single image were neglected. The second work concerns the external validation and clinical utility assessment of EndoBRAIN, an AI-assisted system to classify colorectal polyps into malignant or non-malignant. EndoBRAIN was trained with 69 142 endocytoscopic images from patients with colorectal polyps from five academic centers in Japan. Its clinical validity had previously been confirmed in a single-center prospective study. However, since its implementation depends on governmental regulatory approval, the current study compared EndoBRAIN's diagnostic performance against 30 endoscopists (20 trainees, 10 experts) using stained and narrow-band endocytoscopic images in a web-based trial. The authors found their CADx tool accurately differentiated neoplastic from non-neoplastic lesions, outperforming all endoscopists for stained images, achieving similar performance in narrow-band images, and being accepted for clinical use. The third diagnostic model concerns the development of a deep learning model to predict the revised Vienna Classification in colonoscopy, which categorizes colorectal neoplasms into different levels of malignancy using standard endoscopic colonoscopy images . Several CNN architectures were compared, namely AlexNet, ResNet152, and EfficientNet-B8, with ResNet152 being chosen as the prediction model due to its higher accuracy and fastest inference time. 
The model was trained using 56,872 colonoscopy images (6775 lesions) and validated on 255 images (128 lesions) from 7 external institutions in Japan. The authors also compared diagnostic performance against endoscopists (five novices, three fellows, and four experts). The AI system's sensitivity and specificity exceeded those of all endoscopists. Nevertheless, the model cannot discriminate between high-grade dysplasia and invasive cancer (categories 4 and 5 of the revised Vienna Classification), and only binary classification is supported. In the fourth document, the authors tested two pre-trained radiomics-based CNN architectures (Inception-ResNet-v2 and ResNet-152) to automatically classify colorectal neoplasms under three label sets: 7-class (T1-4 colorectal cancer, high-grade dysplasia, and tubular adenoma vs. non-neoplasms), 4-class (advanced CRC vs. early CRC vs. adenoma vs. healthy), and 2-class (neoplastic vs. non-neoplastic, and advanced vs. non-advanced lesions) . The CNNs were trained on a South Korean dataset (3453 colonoscopy images, 1446 patients) and temporally and geographically validated on 240 images (and as many patients) from another institution. CAM was used to highlight the models' decisions. The best-performing architecture was ResNet-152 for 7-way and 4-way diagnoses, but Inception-ResNet-v2 achieved better results on binary classifications. In addition, the models' performance was compared with one novice and two experienced endoscopists with six months and more than five years of colonoscopy experience, respectively. Although resulting in high accuracy, neither CNN architecture could outperform the endoscopists. Furthermore, this retrospective study only considered three types of diseases and white-light colonoscopy images.
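One plausible reading of the 7-class, 4-class, and 2-class label hierarchy can be written as an explicit regrouping; the exact mapping of T stages to "early" vs. "advanced" CRC assumed below may differ from the paper's:

```python
# Label-regrouping sketch for the 7-way / 4-way / 2-way tasks.
# Assumed (not confirmed by the source): early CRC = T1,
# advanced CRC = T2-T4, and the adenoma group = tubular adenoma + HGD.

FINE = ["T1", "T2", "T3", "T4", "HGD", "TA", "non-neoplasm"]

TO_4CLASS = {
    "T1": "early CRC",
    "T2": "advanced CRC", "T3": "advanced CRC", "T4": "advanced CRC",
    "HGD": "adenoma", "TA": "adenoma",
    "non-neoplasm": "healthy",
}

def to_2class(fine_label):
    return "non-neoplastic" if fine_label == "non-neoplasm" else "neoplastic"

four = [TO_4CLASS[l] for l in FINE]
two  = [to_2class(l) for l in FINE]
```

Deriving the coarse tasks from one fine-grained label set keeps the three classification problems mutually consistent.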
The second set of articles was devoted to predicting outcomes from MRI scans in patients with colorectal cancer undergoing neoadjuvant chemoradiotherapy (NCRT), accruing 143 MRIs from 118 patients and a mean AUC and accuracy of 0.77 and 81.9%, respectively . The first was a prospective study using a multipath CNN on MRI scans (diffusion kurtosis and T2-weighted) . The authors used a dataset of 412 patients (290 for development and 93 for temporal validation) with locally advanced rectal adenocarcinoma scheduled for NCRT. The researchers developed three multipath CNN-based models: one to preoperatively predict pathologic complete response (pCR) to neoadjuvant chemoradiotherapy, one to assess tumor regression grade (TRG) (TRG0 and TRG1 vs. TRG2 and TRG3), and one to predict T downstaging. In addition, the authors evaluated the models' utility by comparing the performance of two radiologists – with 10 and 15 years of experience – with and without the models' assistance. The results showed excellent performance in predicting pCR, superior to the assessment by the two radiologists, whose error rate was also reduced when assisted by the DL model. Although with lower performance, the TRG and T downstaging models also achieved promising results, with AUCs of 0.70 and 0.79, respectively (although not outperforming the clinicians). Nevertheless, this monoinstitutional research required manual delineation, and interobserver variability was not analyzed. Moreover, further validation studies are necessary to assess performance with different MRI scanners. The second group of researchers developed an MRI-based CNN (DC3CNN) to predict tumor regression grade (assessment of tumor size) in response to NCRT in patients with colorectal liver metastases . The authors used prospective internal (328 lesions from 155 patients) and retrospective external cohorts (61 images, 25 patients) to collect pre- and post-treatment T2-weighted and DW-MRI scans.
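Tumor response in such studies is commonly graded with RECIST, which scores changes in the sum of target-lesion diameters; a simplified sketch of the RECIST 1.1 target-lesion rules (omitting the non-target-lesion and new-lesion criteria):

```python
def recist_category(baseline_mm, nadir_mm, current_mm):
    """Simplified RECIST 1.1 target-lesion response.

    Inputs are sums of target-lesion diameters in mm at baseline, at the
    nadir (smallest sum on study), and at the current assessment.
    Non-target lesions and new lesions are deliberately omitted.
    """
    if current_mm == 0:
        return "CR"   # complete response: disappearance of target lesions
    # Progression is judged against the nadir: >=20% relative increase
    # AND >=5 mm absolute increase; it takes precedence over PR.
    if current_mm >= nadir_mm * 1.20 and current_mm - nadir_mm >= 5:
        return "PD"   # progressive disease
    if current_mm <= baseline_mm * 0.70:
        return "PR"   # partial response: >=30% decrease from baseline
    return "SD"       # stable disease
```

Note that PR is measured from baseline while PD is measured from the nadir, so a deeply responding tumor that regrows can be PD even while still smaller than at baseline.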
The model surpassed the diagnostic accuracy of RECIST, the most commonly used criteria for clinical evaluation of solid tumor response to chemotherapy. However, the study was retrospective, and further studies are needed to validate its performance in larger ethnically diverse patient populations. Lastly, only one model assessed postoperative survival of colorectal cancer using text-based data . The model was trained on the SEER database (364 316 patients) and externally validated (temporally and ethnically) on a Korean dataset (1 572 subjects, 607 women, 965 men). The authors compared 4 ML algorithms, namely logistic regression, DTs, RFs, and LightGBM, to obtain an optimal prognostic model. The best-performing model – LightGBM – outperformed TNM-7 in predicting survival for all tested periods (1, 2, 3, 4, 5, 6, 8, and 10 years). Still, data were collected retrospectively from a public database and a single institution using only text-based data, so prospective studies are necessary, and clinicopathological, molecular, and radiologic variables should also be incorporated. Three studies involved esophageal cancers. Two papers studied neoplasia detection in patients with Barrett’s esophagus, a medical condition resulting from long-term acid-reflux damage, causing esophageal tissue lining to thicken and become irritated, increasing cancer risk . The same group of researchers conducted both studies: the first paper describes model development for detection , while the second encompasses its tuning and update to include location . The authors proposed a multi-stage pretraining approach that involved training a CNN learning model on 494,355 gastrointestinal images before fine-tuning it on a smaller dataset of medical images specific to Barrett's neoplasia. The model was trained with images from different endoscopes. 
In the first paper , using data from separate institutions, the authors used a retrospective dataset of early Barrett’s neoplasia for primary validation (80 patients, unknown proportion) and a second prospectively acquired dataset (80 patients and images) to compare their model’s performance against fifty-three endoscopists (17 seniors, 8 juniors, 18 fellows, and 10 novices). In the second paper, the researchers validated their model on three prospective datasets: one with clinically representative images (80 individuals), one with subtle lesions (80 subjects), and one in a live setting with dysplastic and nondysplastic patients (ten each) . It showed excellent performance on the three external validation datasets, and its detection and location performances were also compared against the 53 experienced endoscopists on the subtle lesions. The CAD system outperformed all 53 endoscopists for all tested metrics in both papers, obtaining an average accuracy, sensitivity, and specificity of 87.9%, 91.7%, and 84.16%, respectively. The models developed in both articles performed similarly and were tested in clinically realistic scenarios, with an average accuracy, sensitivity, and specificity of 88.45%, 91.25%, and 85.63%, respectively, enhancing CNNs’ predictive power. Additionally, a retrospective study evaluated cancer-specific survival for esophageal adenocarcinoma and squamous cell carcinoma according to individual treatment recommendations . The authors trained a deep-, regression-, and text-based survival neural network (DeepSurv, multi-layer perceptron) using the SEER database (6855 patients) and validated it on 150 women and 350 men from their institution (China). Additionally, prognostic performance was compared against TNM-8, having exceeded it. However, only one medical center was used, and research was not performed in an accurately representative clinical setting. In five articles, models were developed for gastric-related tasks. 
The first three studies had a diagnostic component. In the first research, the authors developed two models – GastroMIL and MIL-GC –, training them on WSIs from H&E slides magnified 30 times collected from TCGA and a Chinese institution. They also temporally and geographically validated them with 175 WSIs from 91 patients from NHGRP . GastroMIL used an ensemble of a CNN and an RNN to distinguish gastric cancer from normal gastric tissue images. Its performance was compared against one junior and three expert pathologists. MIL-GC, a regression-based model, was created to predict patients’ overall survival. Besides WSIs, MIL-GC uses clinical data, namely survival state, overall survival time, age, sex, tumor size, neoplasm histologic grade, and pathologic T, N, M, and TNM-8 stages. The deep learning models achieved high performance in both tasks, with an overall accuracy of 92% for diagnosis and a C-index of 0.657 for prognosis prediction in the external dataset. Compared to human performance, GastroMIL outperformed the junior pathologist in accuracy and sensitivity but was surpassed by the experienced pathologists (in accuracy, sensitivity, and specificity). However, the tested cohorts were retrospective and had unbalanced survival times, and clinical utility was not evaluated for the prognostic model. The second study used a CNN (ResNet-50) for real-time gastric cancer diagnosis . The model was developed with 3 407 endoscopic images of 666 patients with gastric lesions from two institutions. The DCNN model was tested on a temporally different dataset of endoscopic videos from a separate institution (54 videos from 54 patients), and performance was compared against 20 endoscopists (6 experts, 14 novices). The model achieved better performance than any of the endoscopists, and diagnostic accuracy, sensitivity, and specificity increased for all clinicians while assisted by the model. 
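The C-index reported for MIL-GC measures how often a prognostic model ranks pairs of patients correctly under right-censoring; Harrell's version can be computed directly (schematic implementation, toy data):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.

    A pair is comparable when the subject with the shorter observed time
    experienced the event; it is concordant when that subject was also
    assigned the higher predicted risk. Ties in risk count as 1/2.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [2, 4, 5, 7]          # follow-up times (e.g., years)
events = [1, 1, 0, 1]          # 1 = death observed, 0 = censored
risks  = [0.9, 0.3, 0.4, 0.2]  # model-predicted risk scores
cindex = concordance_index(times, events, risks)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts MIL-GC's external 0.657 in context.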
Nevertheless, despite decreasing the aggregate diagnostic time from 4.35 s to 3.01 s, it increased the experts' diagnostic time by 0.10 s. In addition, the diagnostic model was only tested on high-quality images, and the validation dataset was small and domestic. Although slightly less sensitive than GastroMIL (93.2% vs. 93.4%), the model developed in this study achieved the best accuracy, evidencing that endoscopic images and videos might be more appropriate for diagnosing gastric cancer. The third model was created using endoscopic ultrasonography (EUS) images for the differential diagnosis of gastric mesenchymal tumors, including GISTs, leiomyomas, and schwannomas . This model was trained with EUS images from three Korean institutions and tested on a temporally separate set of 212 images from the same centers (69 patients, 38 female, 31 male). A sequential analysis approach was adopted using two CNNs: the first classifies the tumor as GIST or non-GIST; for non-GISTs, the second CNN classifies it as either a leiomyoma or a schwannoma. The results were compared against junior (n = 3, less than 200 examinations) and expert endoscopists (n = 3, more than 500 examinations) who evaluated the same images, the model having surpassed them in both types of classification. However, this study was retrospective and involved a small number of patients, and the types of equipment used to perform the ultrasounds varied considerably across facilities. The last two papers concerned outcome predictions. The first presents a multi-institutional study that uses multitask deep learning to predict peritoneal recurrence and disease-free survival in gastric cancer patients after curative-intent surgery based on CT images . Supervised contrastive learning and a dynamic convolutional neural network were combined to achieve this purpose, and Grad-CAM was used to explain the model’s decisions.
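The sequential two-CNN analysis described above for gastric mesenchymal tumors amounts to a simple classifier cascade; a sketch with stubbed predictors standing in for the trained networks:

```python
# Cascade sketch: CNN 1 separates GIST from non-GIST; only non-GISTs
# reach CNN 2, which separates leiomyoma from schwannoma. The lambdas
# below are placeholder predictors keyed on fake image ids.

def classify_mesenchymal_tumor(image, cnn1, cnn2):
    if cnn1(image) == "GIST":
        return "GIST"
    return cnn2(image)  # "leiomyoma" or "schwannoma"

cnn1 = lambda img: "GIST" if img == "img-A" else "non-GIST"
cnn2 = lambda img: "schwannoma" if img == "img-C" else "leiomyoma"

results = [classify_mesenchymal_tumor(i, cnn1, cnn2)
           for i in ("img-A", "img-B", "img-C")]
```

The cascade lets each stage specialize on a binary decision instead of forcing one network to solve the harder three-way problem directly.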
The model included CT scans from three patient cohorts, and external validation included 1 043 patients (329 women, 714 men) and as many images from another Chinese institution. In addition, the authors investigated clinician performance for peritoneal recurrence prediction with and without the assistance of the AI model, having found that performance was significantly enhanced after integrating it and that the model alone surpassed all physicians. Nonetheless, only East Asian patients were included in this retrospective study, which was not performed in a real clinical setting, and sensitivity was only reported for one of the clinicians. The last study discusses the use of CT radiomics to predict the response of advanced gastric cancer to neoadjuvant chemotherapy and to detect pathological downstaging at an early stage . The authors trained two SVCs on 206 patients who had undergone three or four cycles of chemotherapy and externally validated them on two testing cohorts, which were also used for benchmarking detection against RECIST. The first testing cohort consists of temporal validation (40 patients and CTs, 13 women, 27 men), while the second differs in the number of chemotherapy cycles (46 individuals and CTs, 10 women, 36 men). Performance for the detection model surpassed RECIST in both cohorts, and, except for sensitivity, the response prediction model also produced positive results. However, retrospective data and a small, unbalanced sample size constrain this study, which was not evaluated in a clinically representative setting. Two models were developed for liver cancer-related predictions. The first aimed at classifying hepatocellular carcinomas and cholangiocarcinomas (differential diagnosis) . The authors developed a web-based (cloud-deployed AI model and browser-based interface) CNN (DenseNet architecture) using WSIs from H&E slides magnified 40 times and used Grad-CAM to increase the model’s explainability. 
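Grad-CAM, used by several of the reviewed models for explainability, weights each feature map of the last convolutional layer by the spatially averaged gradient of the class score, sums the weighted maps, and applies a ReLU; the core computation is shown here on hand-made 2×2 maps (real implementations obtain the activations and gradients from the CNN via backprop hooks):

```python
# Grad-CAM core, schematic form. `activations` and `grads` are K feature
# maps of shape H x W (nested lists), standing in for the last conv
# layer's outputs and the class score's gradients w.r.t. them.

def grad_cam(activations, grads):
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weights: global average pooling of each gradient map.
    weights = [sum(sum(row) for row in g) / (h * w) for g in grads]
    # ReLU over the weighted sum of activation maps.
    cam = [[max(0.0, sum(wk * a[i][j] for wk, a in zip(weights, activations)))
            for j in range(w)] for i in range(h)]
    return cam

acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]   # K = 2 maps
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
heatmap = grad_cam(acts, grads)
```

The ReLU is what makes the map class-discriminative: regions whose activations push the score down are zeroed out rather than shown as negative evidence.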
The training dataset was obtained from TCGA (70 slides from 70 unique patients). The external validation dataset was collected from the Department of Pathology at Stanford University Medical Center (80 slides from 24 women and 56 men). The model achieved a diagnostic accuracy of 84.2% in the validation cohort. Diagnostic performance was also compared to that of 11 pathologists. Except for the two unspecified pathologists, performance (AUC) increased for all clinicians when assisted by this tool. However, the pathologists only had access to the WSIs (as opposed to being complemented with clinical data), the model required manual intervention for patch selection, and the study was retrospective with a small sample size (development and external validation with a total of 150 WSIs and patients). The second model was designed to predict three-year overall survival for intrahepatic cholangiocarcinoma patients after undergoing hepatectomy using an ensemble of Random Forests, XGBoost, and GBDT . Using a single quaternary Chinese institution, the authors collected 1390 patients for training and 42 patients (12 women, 30 men) for external temporal validation. Results were compared against the TNM-8 and LCSGJ staging systems, with model performance exceeding that of the routinely used tools. Nonetheless, this was a monoinstitutional endeavor limited to a small number of Asian patients. Furthermore, only six prognostic factors were used: carcinoembryonic antigen, carbohydrate antigen 19–9, alpha-fetoprotein, pre-albumin, and T and N stages. Three papers described prognostic models for cancers in organs affecting the endocrine system (pancreas and thymus), whose results are depicted in Table .

Pancreatic Cancer

The first two studies assessed survival for pancreatic ductal adenocarcinoma (PDAC) patients but adopted disparate research designs and clinical inputs .
The first group of researchers used a regression-based random survival forest model to prognosticate patients with advanced pancreatic cancer . Aimed at predicting overall survival for patients with unresectable PDAC, the model was developed with clinical data and CT scans from a German institution (203 patients). It was temporally and geographically validated using only text-based clinical data from patients with liver metastases from the same country (8 women, 14 men) and compared against mGPS, having outperformed it. Additionally, the authors used SHAP to explain their model, finding that the inflammatory markers C-reactive protein and neutrophil-to-lymphocyte ratio had the most significant influence on its decision-making. Nonetheless, only twenty national patients were used to validate the model externally, and different types of inputs were used for training and testing. The second set of authors used an ensemble of ML methods – ANN, logistic regression, RF, GB, SVM, and CNNs (3D ResNet-18, R(2 + 1)D-18, 3D ResNeXt-50, and 3D DenseNet-121) – to predict 2-year overall and 1-year recurrence-free survival for PDAC patients after surgical resection . The classifier was trained and tuned using 229 patients and temporally validated with CECT images and seventeen clinical variables from the same South Korean institution (53 CECTs from 27 women and 26 men). Grad-CAM was used to explain the model's decisions, and comparisons were made against TNM-8 to evaluate clinical utility. Although more accurate, specific, and with a higher PPV than TNM-8, it was less sensitive for both predictions and had a lower NPV for overall survival prediction. Furthermore, tumor margins were manually segmented, and the model did not consider histopathologic data.

Thymic Cancer

One study was designed for the simplified risk categorization of thymic epithelial tumors (TETs), rare cancer forms .
Here, three types of tumors were evaluated: low-risk thymoma (LRT), high-risk thymoma (HRT), and thymic carcinoma (TC). Three triple classification models were developed using radiomic features extracted from preoperative NECT images and clinical data from 433 patients: (i) LRT vs. HRT + TC; (ii) HRT vs. LRT + TC; (iii) TC vs. LRT + HRT. The authors compared several CT-based classifiers: logistic regression, linear SVC, Bernoulli and Gaussian Naïve Bayes, LDA, Stochastic Gradient Descent, SVM, DT, kNN, MLP, RF, AdaBoost, gradient boosting, and XGBoost. Combined with clinical data, the SVM model demonstrated the best performance for predicting the simplified TETs risk categorization. In addition, the SVM model was validated in a temporally different cohort using images from 5 types of scanners (76 scans and patients, 33 women, 48 men). Finally, its diagnostic performance was compared against three radiologists (3, 6, and 12 years of experience), having exceeded them regarding AUC (0.844 versus 0.645, 0.813, and 0.724) but not for other metrics (accuracy, sensitivity, and specificity). Caveats include the reduced number of patients, the low number of thymic carcinomas, and the incomplete automation of the models. Table illustrates the models developed for genitourinary cancers, including the bladder, cervix, prostate, and uterus.

Bladder Cancer

From the retrieved models, only one assesses outcomes for primary bladder cancers . This article presents a CNN-based strategy to predict the muscular invasiveness of bladder cancer based on CT images and clinical data. The model was developed with 183 patients. Its performance was tested on an independent institution's temporally and geographically different validation cohort of patients with urothelial carcinoma (13 women, 62 men, and as many images). The model's predictions were juxtaposed with diagnoses from two radiologists with nine and two years of experience, having achieved better accuracy and specificity than the two clinicians but a lower sensitivity. Overall, the authors found that the deep learning algorithm achieved a high accuracy rate in predicting muscular invasiveness, an essential factor in determining the prognosis and treatment of bladder cancer. However, the study is limited by its retrospective nature, exclusion of tumors not visible in CT images, and small sample size.

Cervical Cancer

Similarly, primary tumors of the cervix were only screened in one paper .
Here, the authors trained an ensemble of convolutional and recurrent neural networks on whole-slide images from patients' cervical biopsies and 79 911 annotations from five hospitals and five kinds of scanners. The system comprises (i) two CNNs – the first scans WSIs at low resolution and the second at high resolution – to identify and locate the ten most suspicious areas in each slide; and (ii) an RNN to predict the corresponding probabilities. The system classifies squamous and glandular epithelial cell abnormalities as positive (neoplastic) and normal findings as negative for intraepithelial lesions or malignancies (non-neoplastic). The method was externally validated on multi-center independent test sets of 1 565 women (1 170 without additional conditions and 395 with HPV), and classification performance was compared against three cytopathologists. Although obtaining promising results and surpassing clinician performance for both types of women, the authors highlight that the model was designed for the general female population, implying that further refinements are required for specific comorbidities.

Prostate Cancer

Two models were developed for prostate-cancer-related classifications using multiparametric MRI scans . In the first paper, the authors describe the development of Autoprostate, a system employing deep learning to generate a report summarizing the probability of suspicious lesions qualifying as clinically significant prostate cancer (CSPCa) . The authors trained their approach on the PROSTATEx dataset (249 men), externally validated it on the PICTURE dataset (247 patients), and compared its reports (with post-thresholding and false positive reduction) to those generated by a radiologist with ten years of experience. The system achieved a high level of agreement with the human reports (surpassing the radiologist in AUC and specificity) and could accurately identify CSPCa.
However, this study was retrospective, a single (public) dataset was used for external validation, and only two types of prostate lesions were considered. The second article presented an ML-based approach for prostate cancer risk stratification using radiomics applied to multiparametric MRI scans . In this retrospective, monoinstitutional study, the authors compared seven classification algorithms: logistic regression; linear, quadratic (Q), cubic, and Gaussian kernel-based SVMs; linear discriminant analysis; and RF. After training with 68 patients, the best-performing method – QSVM – was validated on a temporally independent dataset (14 high- and 39 low-risk patients). Its performance was compared against PI-RADS v2, and the authors found that the model could accurately predict the risk of clinically significant prostate cancer. Although the classifier performed equivalently to PI-RADS v2 regarding AUC, it performed substantially better in class-specific measures (F1-score, sensitivity, and PPV), especially for the high-risk class. However, the study is limited by its retrospective nature and small sample size from a single source.

Uterine Cancer

Two studies for primary cancers focused on classifying lesions of the endometrium, the layer of tissue lining the uterus . In the first article, using 245 women as the training cohort, the authors compared nine models – logistic regression (LR), SVM, stochastic gradient descent, kNN, DT, RF, ExtraTrees, XGBoost, and LightGBM – to obtain an optimal algorithm for differential diagnosis (malignant versus benign tumors) .
A radiomics score (radscore) was computed for the best-performing algorithm (logistic regression), and four models were selected using different combinations of T1-weighted, T2-weighted, and DWI MRI features: (i) the radiomics model; (ii) a nomogram, combining the radscore and clinical predictive parameters; (iii) a two-tiered stacking model, where the first tier was the clinical model and the optimal radiomics model (LR), and the second tier used the output of the first tier as the input of the multivariate LR; and (iv) an ensemble model, where the predictions obtained from the preceding clinical model and radiomics model were combined by an accuracy-weighted average. The results showed that all four models accurately differentiated between stage IA endometrial cancer and benign endometrial lesions. Furthermore, during external validation (44 MRIs from 44 women), the authors found that the nomogram had a higher AUC than the radiomics model, revealing more stable discrimination efficiency and better generalizability than the stacking and ensemble models and a radiologist with 30 years of experience (except in sensitivity). Nevertheless, data was collected from two same-country centers (Chinese institutions), only standard radiomics features were extracted, and lesions were manually segmented, which is highly time-consuming. The second paper presented a global-to-local multi-scale CNN to diagnose endometrial hyperplasia and screen endometrial intraepithelial neoplasia (EIN) in histopathological images . The researchers trained the CNN using a large annotated dataset (6 248 images) and tested it on a temporally different set of patients (1631 images, 135 specimens, 102 women). They found that it performed well in diagnosing endometrial hyperplasia and detecting EIN, outperforming a junior pathologist (2 years of experience) and obtaining comparable performance to a mid-level and a senior pathologist (6 and 25 years of experience, respectively).
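The accuracy-weighted averaging behind ensemble model (iv) can be sketched directly; the probabilities and accuracies below are illustrative, not the study's values:

```python
# Accuracy-weighted ensemble sketch: each base model's predicted
# probability of malignancy is weighted by that model's validation
# accuracy, then normalized. All numbers are hypothetical.

def accuracy_weighted_average(probs, accuracies):
    total = sum(accuracies)
    return sum(p * a for p, a in zip(probs, accuracies)) / total

clinical_prob, radiomics_prob = 0.40, 0.80   # hypothetical model outputs
clinical_acc, radiomics_acc   = 0.75, 0.85   # hypothetical accuracies
p_malignant = accuracy_weighted_average(
    [clinical_prob, radiomics_prob], [clinical_acc, radiomics_acc]
)
```

The weighted estimate falls between the two base predictions, pulled toward the more accurate model.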
The authors used Grad-CAM to emphasize the regions the model deemed relevant for diagnosis. However, this retrospective study only used histopathological images (as opposed to WSIs). Besides, it focused solely on classifying healthy slides, hyperplasia without atypia, and endometrial intraepithelial neoplasia, thus neglecting the differentiation between benign lesions and endometrial cancer. From the retrieved models, only one assesses outcomes for primary bladder cancers . This article presents a CNN-based strategy to predict the muscular invasiveness of bladder cancer based on CT images and clinical data. The model was developed with 183 patients. Its performance was tested on an independent institution's temporally and geographically different validation cohort of patients with urothelial carcinoma (13 women, 62 men, and as many images). The model’s predictions were juxtaposed with diagnoses from two radiologists with nine and two years of experience, having achieved better accuracy and specificity than the two clinicians but a lower sensitivity. Overall, the authors found that the deep learning algorithm achieved a high accuracy rate in predicting muscular invasiveness, an essential factor in determining the prognosis and treatment of bladder cancer. However, the study is limited by its retrospective nature, exclusion of tumors not visible in CT images, and small sample size. Similarly, primary tumors of the cervix were only screened in one paper . Here, the authors trained an ensemble of convolutional and recurrent neural networks on whole-slide images from patients' cervical biopsies and 79 911 annotations from five hospitals and five kinds of scanners. The system comprises (i) two CNNs – the first scans WSIs at low resolution and the second at high resolution – to identify and locate the ten most suspicious areas in each slide; (ii) and an RNN to predict corresponding probabilities. 
The system classifies squamous and glandular epithelial cell abnormalities as positive (neoplastic) and normal findings as negative for intraepithelial lesions or malignancies (non-neoplastic). The method was externally validated on multi-center independent test sets of 1 565 women (1 170 without additional conditions and 395 with HPV), and classification performance was compared against three cytopathologists. Although obtaining promising results and surpassing clinician performance for both types of women, the authors highlight that the model was designed for the general women population, implying that further refinements are required for specific comorbidities. Two models were developed for prostate-cancer-related classifications using multiparametric MRI scans . In the first paper, the authors describe the development of Autoprostate, a system employing deep learning to generate a report summarizing the probability of suspicious lesions qualifying as clinically significant prostate cancer (CSPCa) . The authors trained their approach on the PROSTATEx dataset (249 men), externally validated it on the PICTURE dataset (247 patients), and compared its reports (with post-thresholding and false positive reduction) to those generated by a radiologist with ten years of experience. The system achieved a high level of agreement with the human reports (surpassing the radiologist in AUC and specificity) and could accurately identify CSPCa. However, this study was retrospective, a single (public) dataset was used for external validation, and only two types of prostate lesions were considered. The second article presented an ML-based approach for prostate cancer risk stratification using radiomics applied to multiparametric MRI scans . In this retrospective, monoinstitutional study, the authors compared seven classification algorithms: logistic regression, linear, quadratic (Q), cubic, and Gaussian kernel-based SVM, linear discriminant analysis, and RF. 
After training with 68 patients, the best-performing method – QSVM – was validated on a temporally independent dataset (14 high- and 39 low-risk patients). Its performance was compared against PI-RADS v2, having found that the model could accurately predict the risk of clinically significant prostate cancer. Although the classifier performed equivalently to PI-RADS v2 regarding AUC, it performed substantially better in class-specific measures (F1-score, sensitivity, and PPV), especially for the high-risk class. However, the study is limited by its retrospective nature and small sample size from a single source. Two studies for primary cancers focused on classifying lesions of the endometrium, the layer of tissue lining the uterus . In the first article, using 245 women as the training cohort, the authors compared nine models – logistic regression (LR), SVM, stochastic gradient descent, kNN, DT, RF, ExtraTrees, XGBoost, and LightGBM – to obtain an optimal algorithm for differential diagnosis (malignant versus benign tumors) . A radiomics score (radscore) was computed for the best-performing algorithm (logistic regression), and four models were selected using different combinations of T1-weighted, T2-weighted, and DWI MRI features: (i) the radiomics model; (ii) a nomogram, combining the radscore and clinical predictive parameters; (iii) a two-tiered stacking model, where the first tier was the clinical model and the optimal radiomics model (LR), and the second tier used the output of the first tier as the input of the multivariate LR; and (iv) an ensemble model, where the predictions obtained from the preceding clinical model and radiomics model were calculated by an accuracy-weighted average. The results showed that all four models accurately differentiated stage IA endometrial cancer and benign endometrial lesions. 
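The accuracy-weighted average in ensemble model (iv) above simply scales each model's predicted probability by its normalized validation accuracy before summing. A minimal sketch of that combination step — the accuracies and probabilities below are hypothetical, not the study's values:

```python
# Accuracy-weighted averaging of two models' predicted probabilities,
# mirroring ensemble model (iv). All numbers are hypothetical.

def accuracy_weighted_average(probs_a, probs_b, acc_a, acc_b):
    """Combine per-lesion probabilities, weighting each model by accuracy."""
    w_a = acc_a / (acc_a + acc_b)
    w_b = acc_b / (acc_a + acc_b)
    return [w_a * p + w_b * q for p, q in zip(probs_a, probs_b)]

clinical = [0.20, 0.70, 0.90]    # clinical model outputs (hypothetical)
radiomics = [0.40, 0.60, 0.80]   # radiomics model outputs (hypothetical)
combined = accuracy_weighted_average(clinical, radiomics, acc_a=0.80, acc_b=0.90)
print([round(p, 3) for p in combined])  # [0.306, 0.647, 0.847]
```

Because the weights are normalized, the combined output stays a valid probability whenever both inputs are.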
Furthermore, during external validation (44 MRIs from 44 women), the authors found that the nomogram had a higher AUC than the radiomics model, revealing more stable discrimination efficiency and better generalizability than the stacking and ensemble models and a radiologist with 30 years of experience (except in sensitivity). Nevertheless, data was collected from two same-country centers (Chinese institutions), only standard radiomics features were extracted, and lesions were manually segmented, which is highly time-consuming. The second paper encompassed a global-to-local multi-scale CNN to diagnose endometrial hyperplasia and screen endometrial intraepithelial neoplasia (EIN) in histopathological images . The researchers trained the CNN using a large annotated dataset (6 248 images) and tested it on a temporally different set of patients (1 631 images, 135 specimens, 102 women). They found that it performed well in diagnosing endometrial hyperplasia and detecting EIN, outperforming a junior pathologist (2 years of experience) and obtaining comparable performance to a mid-level and a senior pathologist (6 and 25 years of experience, respectively). The authors used Grad-CAM to emphasize the regions the model deemed relevant for diagnosis. However, this retrospective study only used histopathological images (as opposed to WSIs). Besides, it focused solely on classifying healthy slides, hyperplasia without atypia, and endometrial intraepithelial neoplasia, thus neglecting the differentiation between benign lesions and endometrial cancer. As illustrated in Table , five papers studied cancers of the integumentary system, focusing on the breasts and skin.

Breast Cancer

Three studies developed models for cancers originating in the breasts, each with a specific purpose and using different clinical modalities.
In , several text-based machine learning classifiers, namely, DTs, RFs, MLPs, logistic regression, naïve Bayes, and XGBoost, were compared to establish optimal classifiers for osteoporosis, relative fracture, and 8-year overall survival predictions. The algorithm was trained on 420 patients from a Chinese institution and geographically validated on 150 women from a separate local institution. The osteoporosis model was compared against OSTA and FRAX, the fracture model against FRAX, and the prognostic model against TNM-8. The results showed that the XGBoost classifier performed the best for the three tasks and outperformed the other clinical models. Additionally, for explainability, the authors also used SHAP for feature importance analysis for each model: (i) age, use of anti-estrogens, and molecular type are the most predictive of osteoporosis; (ii) osteoporosis, age, and bone-specific alkaline phosphatase are the best predictors for fracture; and (iii) N-stage, molecular type, and age have the highest prognostic value for overall survival. Despite its positive results, prospective studies are needed to validate the model in more diverse patient populations. In , authors explored how combining AI and radiologists can improve breast cancer screening. Using 213 694 retrospectively collected mammograms (X-ray images) from 92 585 women, it was found that the combination of radiologists and AI (CNN-based classifier) achieved the highest accuracy in detecting breast cancer. The sensitivity and specificity of the standalone AI system were significantly lower than an unaided radiologist. However, the decision-referral approach outperformed the unaided radiologist on both sensitivity and specificity for several tested thresholds. Nonetheless, the study only included mammogram images and did not consider other factors, such as patient history or clinical data, which may impact the accuracy of breast cancer screening. 
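The decision-referral approach above acts on the AI's output only when the model is confident and hands borderline mammograms to the radiologist, which is how the combination can outperform either reader alone. A minimal sketch of such a triage rule — the thresholds and scores are hypothetical, not the study's operating points:

```python
# Decision-referral triage: keep confident AI calls, refer the rest to a
# radiologist. Thresholds and scores are hypothetical.

def triage(score: float, low: float = 0.1, high: float = 0.9) -> str:
    """Return the AI's call for confident scores, else defer to the reader."""
    if score <= low:
        return "ai_negative"
    if score >= high:
        return "ai_positive"
    return "refer_to_radiologist"

for s in (0.03, 0.55, 0.97):
    print(s, triage(s))
```

Sweeping `low` and `high` trades off referral workload against the sensitivity and specificity of the combined pipeline, matching the several thresholds tested in the study.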
Furthermore, the AI algorithm used in the study was not optimized for clinical use and may require further development and testing before it can be implemented in a clinical setting. Lastly, the work developed in entailed diagnosing non-cystic benign and malignant breast lesions from ultrasonographic images. Radiomic features were extracted from the ultrasound images, and a random forest model was trained with 135 lesions and externally validated to predict malignancy for each lesion. Moreover, the performance of an experienced radiologist (8 years) was compared with and without the model’s assistance. Although not with statistical significance, the radiologist's assessments improved when using the AI system. However, the final validation population was small (66 ultrasounds from 57 women) and showed different proportions of malignant lesions.

Skin Cancer

Two models were developed to diagnose skin tumors using photographs, producing an average AUC, sensitivity, and specificity of 0.89, 77.1%, and 81.74% . The first was a retrospective validation study assessing the performance of deep neural networks in detecting and diagnosing benign and malignant skin neoplasms of the head and neck, trunk, arms, and legs . In a previous study, the authors trained an ensemble of CNNs (SENet + SE-ResNeXt-50 + faster RCNN) with 1 106 886 image crops from South Korean patients to detect potential lesions and classify skin malignancies.
Here, performance was tested on three new temporal and geographical validation datasets of skin lesions (two national, one international, 46 696 photographs from 10 876 patients): (i) one dataset was used to compare the model’s classification performance against 65 attending physicians in real-world practice; (ii) one’s goal was to evaluate classification performance against 44 dermatologists in an experimental setting; and (iii) the last two were meant to predict exact diagnosis (1 of 43 primary skin neoplasms) in a local (South Korean) and an international (UK, 1 300 images) dataset, with the first also being compared against physicians. In (i) and (ii), performance was calculated for high specificity and high sensitivity thresholds. The algorithm was more sensitive and specific than the dermatologists in the experimental setting. However, attending physicians outperformed it in real-world practice in all tested metrics (sensitivity, specificity, PPV, and NPV). In addition, the model only dealt with high-quality clinical photographs, and there was a lack of ethnic diversity in the study population. The second paper presented a set of CNNs – DenseNet-121 (Faster R-CNN and deep classification network) – developed to detect malignant eyelid tumors from photographic images . The researchers used a 1 417 clinical images dataset with 1 533 eyelid tumors from 851 patients across three Chinese institutions (one for development and two for external validation). Besides using Grad-CAM for interpretation, the AI’s performance on the external dataset (266 pictures from 176 patients) was compared to three ophthalmologists: one junior, one senior, and one expert (3, 7, and 15 years of experience, respectively). It surpassed the junior and senior ophthalmologists’ performance and achieved similar results to the expert.
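Grad-CAM, used for interpretation here and in several other reviewed models, weights each convolutional feature map by its spatially averaged gradient and keeps only the positive contributions. Assuming the activations and gradients have already been extracted from a network, the core combination step is just a few array operations — the arrays below are random stand-ins, not a real CNN:

```python
import numpy as np

# Core Grad-CAM combination on synthetic data. In practice `activations`
# (feature maps) and `gradients` (d score / d activation) come from hooks
# on a CNN layer; here they are random stand-ins.
rng = np.random.default_rng(0)
activations = rng.normal(size=(64, 7, 7))  # K feature maps of size 7x7
gradients = rng.normal(size=(64, 7, 7))    # matching gradients

weights = gradients.mean(axis=(1, 2))      # alpha_k: pooled gradients
cam = (weights[:, None, None] * activations).sum(axis=0)
cam = np.maximum(cam, 0)                   # ReLU keeps positive evidence
cam /= cam.max() + 1e-8                    # normalize to [0, 1] for overlay

print(cam.shape)
```

The resulting low-resolution map is then upsampled to the input image size and overlaid as the familiar heatmap.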
Notwithstanding its potential, the system still needs evaluation on non-Asian populations and prospectively acquired datasets, and it was only developed for detection (it cannot provide a specific diagnosis).
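Across the studies in this review, models and clinicians are compared on the same handful of confusion-matrix metrics (accuracy, sensitivity, specificity, PPV, NPV). As a reminder of how they relate, a minimal sketch with illustrative counts rather than data from any cited study:

```python
# Sensitivity, specificity, accuracy, PPV and NPV from confusion-matrix
# counts. The counts below are illustrative, not from any reviewed study.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Head-to-head metrics used throughout model-vs-clinician comparisons."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate (recall)
        "specificity": tn / (tn + fp),          # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }

m = classification_metrics(tp=40, fp=10, tn=25, fn=5)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is one reason external validation cohorts with different case mixes can change a model's apparent ranking against clinicians.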
Thirteen papers addressed respiratory system cancers, which predominantly concerned the lungs, but also included the larynx, nasopharynx, and mesothelium (Table ).

Lung Cancer

Ten approaches were developed for lung cancer assessments. The first document describes a validation study of a CNN-based tool (DenseNet) designed to predict the malignancy of pulmonary nodules . The model was previously trained with the NLST dataset and was now externally validated in 3 UK centers with different CT scanners (1 397 CECTs and NECTs, 1 187 patients of unknown gender ratio). The authors also evaluated its clinical utility by comparing it to the Brock Model. Although slightly less specific than the Brock model, the detection algorithm developed by the authors had a higher AUC and sensitivity. Despite having undergone international validation, prospective studies in ethnically diverse populations are still amiss. The second paper involved developing and validating a model to predict the malignancy of multiple pulmonary nodules from CT scans and eleven clinical variables . The study analyzed data from various medical centers.
The second retrospective dataset was used for generalizability, containing patients from a Chinese institution with solitary pulmonary nodules (195 patients and images, 110 women, 85 men), whose results were also compared against the four just-mentioned models. The third and last dataset included data from 4 Chinese centers and was collected prospectively for secondary validation and comparisons against clinicians (200 CTs, 78 patients, 51 women, 27 men). This comparison involved three thoracic surgeons and one radiologist, who achieved an average sensitivity of 0.651 and specificity of 0.679. The model significantly outperformed this average and each clinician’s AUC, as well as in all comparisons against the routinely used models. In addition, SHAP was used to identify the most predictive nodule characteristics, finding that the most informative features were nodule size, type, count, border, patient age, spiculation, lobulation, emphysema, nodule location, and distribution. Nonetheless, besides not reporting individual clinician sensitivity and specificity in the prospective cohort, the drawbacks of this study include only assessing typical high-risk patients and the lack of validation with different ethnicities. The work in involved a CNN-based model for predicting the presence of visceral pleural invasion in patients with early-stage lung cancer. The deep learning model was trained using a dataset of CT scans from 676 patients and externally validated on a temporally different cohort from the same South Korean institution (141 CTs from 84 women and 57 men). Besides using Grad-CAM to evidence its decisions, this CNN can adapt its sensitivity and specificity to meet the clinical needs of individual patients and clinicians. The model achieved a performance level comparable to three expert radiologists but did not surpass them except in PPV. Besides, these are results from a monoinstitutional retrospective study where geographical validation was not performed.
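The adaptable sensitivity/specificity mentioned above comes down to moving the decision threshold applied to the CNN's output scores. A minimal sketch of choosing, on validation data, the threshold that reaches a target sensitivity — the scores and labels here are hypothetical:

```python
import math

# Choose the operating threshold that reaches a target sensitivity on
# validation scores. All numbers are hypothetical.

def threshold_for_sensitivity(scores, labels, target):
    """Highest threshold whose sensitivity on (scores, labels) >= target."""
    positives = sorted((s for s, y in zip(scores, labels) if y == 1),
                       reverse=True)
    needed = math.ceil(len(positives) * target)  # positives we must catch
    return positives[needed - 1]

scores = [0.95, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1, 1, 0, 1, 1, 0, 0, 0]
t = threshold_for_sensitivity(scores, labels, target=0.75)
print(t)  # calling score >= t positive catches at least 75% of positives
```

Lowering the threshold raises sensitivity at the cost of specificity, which is exactly the trade-off a clinician can tune per patient.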
In addition to using a small number of patients, data was also imbalanced, and the model was not fully automated (required manual tumor annotations). The fourth article concerns developing an EfficientNetV2-based CNN system to predict the survival benefit of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs) in patients with stage IV non-small cell lung cancer . The model was developed with accessible pre-therapy CT images from five centers and externally validated on a monoinstitutional national dataset (China, 92 CTs from 92 patients). The authors also compared radiologists' and oncologists' (three each, 2, 5, and 10 years of experience) performance with and without ESBP. The results showed that, while assisted by the system, all radiologists improved their diagnostic accuracy, sensitivity, specificity, PPV, and NPV (except for the trainee oncologist, who achieved better sensitivity without the model). However, prospective studies in ethnically rich cohorts are still necessary to implement this tool in clinical practice. The fifth study aimed at finding optimal predictors of two-year recurrence, recurrence-free survival, and overall survival after curative-intent radiotherapy for non-small cell lung cancer . Ten text-based ML models were trained on 498 patients and compared: ANN, Linear and Non-linear SVM, Generalized Linear Model, kNN, RF, MDA, Partial Least Squares, NB, and XGBoost. The best-performing models were as follows: (i) an ensemble of kNN, NB, and RF for recurrence classification; (ii) kNN for recurrence-free survival prediction; and (iii) a combination of XGBoost, ANN, and MDA for overall survival. The three optimal predictors were externally validated using routinely collected data from 5 UK institutions (159 seniors, 71 women, 88 men) and compared against TNM-8 and WHO performance status.
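The recurrence ensemble above (kNN, NB, and RF) is the kind of combination scikit-learn expresses as a soft-voting classifier, which averages the constituent models' predicted probabilities. A generic sketch on synthetic data — this is not the authors' pipeline or data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a binary recurrence/no-recurrence task.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.2f}")
```

Soft voting tends to help when the base learners make uncorrelated errors, which is the usual rationale for mixing instance-based (kNN), probabilistic (NB), and tree-based (RF) models.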
The recurrence and overall survival models outperformed both routinely used systems, but these tools surpassed the recurrence-free survival predictor’s performance. Moreover, this study was retrospective and had a small sample size with missing data. The sixth study was designed to identify high-risk smokers to predict long-term lung cancer incidence (12 years) . In this paper, the authors developed a convolutional neural inception V4 network based on low-dose chest CT images, age, sex, and current versus former smoking statuses. The CNN was trained using patients from the PLCO trial and externally validated on data from the NLST randomized controlled trial (2 456 women and 3 037 men from 33 USA institutions). The model was also compared against PLCOm2012 to evaluate clinical utility, having exceeded its performance for all assessed metrics (AUC, sensitivity, specificity, PPV, and NPV). However, this study was retrospective, lacked ethnic diversity, and was not evaluated in a clinically realistic scenario. Additionally, information from symptomatic patients was unavailable due to using data from a screening trial. In the seventh article, a CNN-based model was developed for the automated detection and diagnosis of malignant pulmonary nodules on CECT scans . The algorithm was externally validated on four separate datasets with ethnic differences (three from South Korea and one from the USA, amounting to 693 patients and CTs). Furthermore, the diagnostic performance of 18 physicians (from non-radiologists to radiologists with 26 years of experience) was compared while assisted and not assisted by the algorithm for one dataset. The model achieved an excellent performance in the four tested datasets, outperforming all clinicians, and the professionals’ accuracy increased while aided by the model for all tested groups.
Nonetheless, the model was undertrained for small nodules (< 1 cm) and trained only for malignant nodule detection for one type of CT (posterior-anterior projections), and the study was retrospective and not representative of a real-world clinical setting. The eighth algorithm consisted of a multilayer perceptron (Feed-Forward Neural Network) paired with a Cox proportional hazards model to predict cancer-specific survival for non-small cell lung cancer . The text-based model was trained using the SEER database and externally validated on patients from a Chinese tertiary pulmonary hospital (642 women, 540 men). It was compared against TNM-8, having outperformed it with statistical significance. Although tested with real-world clinical data, prospective multi-institutional studies are needed before the deep learning model can be used in clinical practice. The ninth article described developing, validating, and comparing three CNN models to differentiate between benign and malignant pulmonary ground-glass nodules (GGNs) . The first CNN only used CT images. The second CNN used clinical data: age, sex, and smoking history. The third was a fusion model combining CTs and clinical features, achieving the best performance. This model was temporally and geographically validated with 63 CT scans from 61 patients (39 women, 22 men). Its classification performance was compared against two radiologists (5 and 10 years of experience) for clinical utility assessment. Despite performing satisfactorily in external validation, the model was surpassed by both clinicians in accuracy, sensitivity, and NPV, only producing higher results for specificity and PPV. Furthermore, this study was retrospective, and validation was neither international nor evaluated in a realistic clinical setting. In the tenth and final paper, a Neural Multitask Logistic Regression (N-MTLR) network was developed for survival risk stratification for stage III non-small cell lung cancer .
The text-based deep learning system was trained on 16 613 patients from the SEER database and externally validated on subjects from a Chinese institution (172 patients, 39 women, 133 men). The results in the external dataset showed that the DSNN could predict survival outcomes more accurately than TNM-8 (AUC of 0.7439 vs. 0.561). The study results suggest that the deep learning system could be used for personalized treatment planning and stratification for patients with stage III non-small cell lung cancer. However, prospective studies in multi-institutional datasets are still required.

Laryngeal, Mesothelial and Nasopharyngeal Cancers

Three models were developed to assess tumors of other elements of the respiratory system. In , the authors trained a CNN (GoogLeNet Inception v3 network) with 13 721 raw endoscopic laryngeal images – including laryngeal cancer (LCA), precancerous laryngeal lesions (PRELCA), benign laryngeal tumors (BLT), and healthy tissue – from three Chinese institutions (1 816 patients). External validation was performed on 1 176 white-light endoscopic images from two additional institutions in the same country (392 patients), testing the model for binary classification – urgent (LCA and PRELCA) or non-urgent (BLT and healthy) – and between the four conditions. Predictions for both classification types were compared against three endoscopists (3, 3 to 10, and 10 to 20 years of experience). In two-way classification, the algorithm was less accurate than one endoscopist and less sensitive than two but outperformed all clinicians in four-way diagnostic accuracy. Still, this accuracy was relatively low (less than 80%), the study was retrospective, and all tested laryngoscopic images were obtained by the same type of standard endoscopes. Cancers of the mesothelium were approached in a single retrospective multi-center study .
The paper uses DL to distinguish between two types of mesothelial cell proliferations: sarcomatoid malignant mesotheliomas (SMM) and benign spindle cell mesothelial proliferations (BSCMP). SMMs and BSCMPs are difficult to distinguish using traditional histopathological methods, resulting in misdiagnoses. The authors propose a new strategy—SpindleMesoNET—that uses an ensemble of a CNN and an RNN to analyze WSIs of H&E-stained mesothelial slides magnified 40 times. The model was trained on a Canadian dataset, externally validated on 39 images from 39 patients from a Chinese center, and compared against the diagnostic performance of three pathologists on a referral test set (40 WSIs from 40 patients). The accuracy and specificity of SpindleMesoNET on the referral set cases (92.5% and 100%, respectively) exceeded that of the three pathologists on the same slide set (91.7% and 96.5%). However, the pathologists were more sensitive than the diagnostic model (87.3% vs. 85.3%). In addition, the study had a minimal sample size, and only AUC was reported for the external validation dataset (0.989), which, although considerably high, is insufficient to assess the model’s effectiveness. The last study entailed developing and validating a CNN-based model to differentiate malignant carcinoma from benign nasopharyngeal lesions using white-light endoscopic images . Malignant conditions included lymphoma, rhabdomyosarcoma, olfactory neuroblastoma, malignant melanoma, and plasmacytoma. Benign subtypes encompassed precancerous or atypical hyperplasia, fibroangioma, leiomyoma, meningioma, minor salivary gland tumor, fungal infection, tuberculosis, chronic inflammation, adenoids or lymphoid hyperplasia, nasopharyngeal cyst, and foreign body. The model was trained on 27 536 images collected retrospectively (7 951 subjects) and temporally (prospectively) externally validated with 1 430 images (from 355 patients) from the same Chinese institution. 
Diagnostic performance was compared against 14 endoscopists: (i) three experts with more than five years of experience; (ii) eight residents with one year of experience; and (iii) interns with less than three months of experience. Except for the interns’ sensitivity, the model’s diagnostic performance surpassed the endoscopists in all tested metrics. However, data were collected from a single tertiary institution, and more malignancies should be included. Although not developed for the same cancer type, the two cancer detection studies for the larynx and nasopharynx are comparable due to using white-light endoscopic images. Both used CNNs and involved more than 300 patients and 1000 images, but the optimal diagnostic performance – although less sensitive (72% vs. 90.2% in ) – was achieved for the GoogLeNet Inception v3 network CNN with an AUC of 0.953, an accuracy of 89.7%, and a specificity of 94.8%, enhancing the value of pre-training CNNs.
Ten ML methods were compared to identify the best malignancy predictor: AdaBoost, DT, Logistic Regression, Linear SVM, Radial Basis Function Kernel SVM, NB, kNN, Neural Net, Quadratic Discriminant Analysis, RF, and XGBoost. The best-performing model – XGBoost – was tested on three datasets. The first was retrospective, compiled from 6 institutions (five from China and one from South Korea), used for primary external validation (220 patients, 583 CT scans), and compared against four well-established models: Brock, Mayo, PKU, and VA. The second retrospective dataset was used for generalizability, containing patients from a Chinese institution with solitary pulmonary nodules (195 patients and images, 110 women, 85 men), whose results were also compared against the four just-mentioned models. The third and last dataset included data from 4 Chinese centers and was collected prospectively for secondary validation and comparisons against clinicians (200 CTs, 78 patients, 51 women, 27 men). This comparison involved three thoracic surgeons and one radiologist, who achieved an average sensitivity of 0.651 and specificity of 0.679. The model significantly outperformed this average and each clinician’s AUC, as well as in all comparisons against the routinely used models. In addition, SHAP was used to identify the most predictive nodule characteristics, finding that the model's most predictive features were nodule size, type, count, border, patient age, spiculation, lobulation, emphysema, nodule location, and distribution. Nonetheless, besides not reporting individual clinician sensitivity and specificity in the prospective cohort, the drawbacks of this study include only assessing typical high-risk patients and the lack of validation with different ethnicities. The work in involved a CNN-based model for predicting the presence of visceral pleural invasion in patients with early-stage lung cancer. 
The deep learning model was trained using a dataset of CT scans from 676 patients and externally validated on a temporally different cohort from the same South Korean institution (141 CTs from 84 women and 57 men). Besides using Grad-CAM to evidence its decisions, this CNN can adapt its sensitivity and specificity to meet the clinical needs of individual patients and clinicians. The model achieved a performance level comparable to three expert radiologists but did not surpass it except in PPV. Besides, these are results from a monoinstitutional retrospective study where geographical validation was not performed. In addition to using a small number of patients, data was also imbalanced, and the model was not fully automated (required manual tumor annotations). The fourth article concerns developing an EfficientNetV2-based CNN system to predict the survival benefit of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs) in patients with stage IV non-small cell lung cancer . The model was developed with accessible pre-therapy CT images from five centers and externally validated on a monoinstitutional dataset from a national dataset (China, 92 CTs from 92 patients). The authors also compared radiologists' and oncologists' (three each, 2, 5, and 10 years of experience) performance with and without ESBP. The results showed that, while assisted by the system, all radiologists improved their diagnostic accuracy, sensibility, specificity, PPV, and NPV (except for the trainee oncologist, who achieved better sensitivity without the model). However, prospective studies in ethnically rich cohorts are still necessary to implement this tool in clinical practice. The fifth study aimed at finding optimal predictors of two-year recurrence, recurrence-free survival, and overall survival after curative-intent radiotherapy for non-small cell lung cancer . 
Ten text-based ML models were trained on 498 patients and compared: ANN, Linear and Non-linear SVM, Generalized Linear Model, kNN, RF, MDA, Partial Least Squares, NB, and XGBoost. The best-performing models were as follows: (i) an ensemble of kNN, NB, and RF for recurrence classification; (ii) kNN for recurrence-free survival prediction; and (iii) a combination of XGBoost, ANN, and MDA for overall survival. The three optimal predictors were externally validated using routinely collected data from 5 UK institutions (159 seniors: 71 women, 88 men) and compared against TNM-8 and WHO performance status. The recurrence and overall survival models outperformed both routinely used systems, but these tools surpassed the recurrence-free survival predictor's performance. Moreover, this study was retrospective and had a small sample size with missing data. The sixth study was designed to identify high-risk smokers to predict long-term lung cancer incidence (12 years) . In this paper, the authors developed a convolutional neural network (Inception-V4) based on low-dose chest CT images, age, sex, and current versus former smoking status. The CNN was trained using patients from the PLCO trial and externally validated on data from the NLST randomized controlled trial (2456 women and 3037 men from 33 USA institutions). The model was also compared against PLCOm2012 to evaluate clinical utility, having exceeded its performance for all assessed metrics (AUC, sensitivity, specificity, PPV, and NPV). However, this study was retrospective, lacked ethnic diversity, and was not evaluated in a clinically realistic scenario. Additionally, information from symptomatic patients was unavailable due to using data from a screening trial. In the seventh article, a CNN-based model was developed for the automated detection and diagnosis of malignant pulmonary nodules on CECT scans .
The algorithm was externally validated on four separate datasets with ethnic differences (three from South Korea and one from the USA, amounting to 693 patients and CTs). Furthermore, the diagnostic performance of 18 physicians (from non-radiologists to radiologists with 26 years of experience) was compared while assisted and not assisted by the algorithm for one dataset. The model achieved an excellent performance in the four tested datasets, outperforming all clinicians, and the professionals’ accuracy increased while aided by the model for all tested groups. Nonetheless, the model was undertrained for small nodules (< 1 cm) and trained only for malignant nodule detection for one type of CT (posterior-anterior projections), and the study was retrospective and not representative of a real-world clinical setting. The eighth algorithm consisted of a multilayer perceptron (Feed-Forward Neural Network) paired with a Cox proportional hazards model to predict cancer-specific survival for non-small cell lung cancer . The text-based model was trained using the SEER database and externally validated on patients from a Chinese tertiary pulmonary hospital (642 women, 540 men). It was compared against TNM-8, having outperformed it with statistical significance. Although tested with real-world clinical data, prospective multi-institutional studies are needed before the deep learning model can be used in clinical practice. The ninth article described developing, validating, and comparing three CNN models to differentiate between benign and malignant pulmonary ground-glass nodules (GGNs) . The first CNN only used CT images. The second CNN used clinical data: age, sex, and smoking history. The third was a fusion model combining CTs and clinical features, achieving the best performance. This model was temporally and geographically validated with 63 CT scans from 61 patients (39 women, 22 men). 
Its classification performance was compared against two radiologists (5 and 10 years of experience) for clinical utility assessment. Despite performing satisfactorily in external validation, the model was surpassed by both clinicians in accuracy, sensitivity, and NPV, only producing higher results for specificity and PPV. Furthermore, this study was retrospective, and validation was neither international nor evaluated in a realistic clinical setting. In the tenth and final paper, a Neural Multitask Logistic Regression (N-MTLR) network was developed for survival risk stratification for stage III non-small cell lung cancer . The text-based deep learning system was trained on 16 613 patients from the SEER database and externally validated on subjects from a Chinese institution (172 patients, 39 women, 133 men). The results in the external dataset showed that the N-MTLR model could predict survival outcomes more accurately than TNM-8 (AUC of 0.7439 vs. 0.561). The study results suggest that the deep learning system could be used for personalized treatment planning and stratification for patients with stage III non-small cell lung cancer. However, prospective studies in multi-institutional datasets are still required. Three models were developed to assess tumors of other elements of the respiratory system. In , the authors trained a CNN (GoogLeNet Inception v3 network) with 13 721 raw endoscopic laryngeal images – including laryngeal cancer (LCA), precancerous laryngeal lesions (PRELCA), benign laryngeal tumors (BLT), and healthy tissue – from three Chinese institutions (1 816 patients). External validation was performed on 1 176 white-light endoscopic images from two additional institutions in the same country (392 patients), testing the model for binary classification – urgent (LCA and PRELCA) or non-urgent (BLT and healthy) – and between the four conditions.
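AUC figures such as the 0.7439 vs. 0.561 comparison above have a direct probabilistic reading: the chance that a randomly chosen positive case is ranked above a randomly chosen negative one. A stdlib sketch of that Mann–Whitney formulation, with invented toy numbers for illustration:

```python
def auc(scores, labels):
    """Mann-Whitney AUC: fraction of positive-negative pairs ranked
    correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])   # every pair ranked correctly -> 1.0
partial = auc([0.9, 0.4, 0.8, 0.3], [1, 1, 0, 0])   # one of four pairs misranked -> 0.75
```

Because it is a rank statistic, AUC is threshold-independent, which is why reviews often pair it with threshold-dependent metrics such as sensitivity and specificity.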
Predictions for both classification types were compared against three endoscopists (3, 3 to 10, and 10 to 20 years of experience). In two-way classification, the algorithm was less accurate than one endoscopist and less sensitive than two but outperformed all clinicians in four-way diagnostic accuracy. Still, this accuracy was relatively low (less than 80%), the study was retrospective, and all tested laryngoscopic images were obtained with the same type of standard endoscope. Cancers of the mesothelium were approached in a single retrospective multi-center study . The paper uses DL to distinguish between two types of mesothelial cell proliferations: sarcomatoid malignant mesotheliomas (SMM) and benign spindle cell mesothelial proliferations (BSCMP). SMMs and BSCMPs are difficult to distinguish using traditional histopathological methods, resulting in misdiagnoses. The authors propose a new strategy—SpindleMesoNET—that uses an ensemble of a CNN and an RNN to analyze WSIs of H&E-stained mesothelial slides at 40× magnification. The model was trained on a Canadian dataset, externally validated on 39 images from 39 patients from a Chinese center, and compared against the diagnostic performance of three pathologists on a referral test set (40 WSIs from 40 patients). The accuracy and specificity of SpindleMesoNET on the referral set cases (92.5% and 100%, respectively) exceeded those of the three pathologists on the same slide set (91.7% and 96.5%). However, the pathologists were more sensitive than the diagnostic model (87.3% vs. 85.3%). In addition, the study had a very small sample size, and only AUC was reported for the external validation dataset (0.989), which, although considerably high, is insufficient to assess the model's effectiveness. The last study entailed developing and validating a CNN-based model to differentiate malignant carcinoma from benign nasopharyngeal lesions using white-light endoscopic images .
Malignant conditions included lymphoma, rhabdomyosarcoma, olfactory neuroblastoma, malignant melanoma, and plasmacytoma. Benign subtypes encompassed precancerous or atypical hyperplasia, fibroangioma, leiomyoma, meningioma, minor salivary gland tumor, fungal infection, tuberculosis, chronic inflammation, adenoids or lymphoid hyperplasia, nasopharyngeal cyst, and foreign body. The model was trained on 27 536 images collected retrospectively (7 951 subjects) and temporally (prospectively) externally validated with 1 430 images (from 355 patients) from the same Chinese institution. Diagnostic performance was compared against 14 endoscopists: (i) three experts with more than five years of experience; (ii) eight residents with one year of experience; and (iii) three interns with less than three months of experience. Except for the interns' sensitivity, the model's diagnostic performance surpassed the endoscopists in all tested metrics. However, data were collected from a single tertiary institution, and more malignancies should be included. Although not developed for the same cancer type, the two cancer detection studies for the larynx and nasopharynx are comparable due to using white-light endoscopic images. Both used CNNs and involved more than 300 patients and 1000 images, but the optimal diagnostic performance – although less sensitive (72% vs. 90.2% in ) – was achieved by the GoogLeNet Inception v3 CNN with an AUC of 0.953, an accuracy of 89.7%, and a specificity of 94.8%, underscoring the value of pre-training CNNs. Four studies using different imaging techniques were designed to diagnose bone cancers, producing an average AUC of 0.88 (Table ). The first two radiomics-based models were developed for the binary classification of atypical cartilaginous tumors (ACT) and appendicular chondrosarcomas (CS) . In , a LogitBoost algorithm was temporally and geographically validated on 36 PET-CT scans from 23 women and 13 men.
Besides externally validating their method, the authors evaluated clinical utility by comparing its diagnostic performance against a radiologist. The model performed satisfactorily in all calculated metrics (AUC, accuracy, sensitivity, PPV, and F1-score), but its accuracy was lower than the radiologist's. In addition, only non-contrast PET-CT scans were included in the analyses. In the following year, research performed by the same first author evaluated bone tumor diagnosis from MRI scans . Radiomic features were extracted from T1-weighted MRI scans, and an ExtraTrees algorithm was trained to classify the tumors. On an external validation dataset of 65 images (34 women, 31 men), the model achieved a PPV, sensitivity, and F1-score of 92%, 98%, and 0.95 in classifying ACTs, and 94%, 80%, and 0.86 for the classification of grade II CS of long bones, respectively (the weighted average is presented in Table ). The model's classification performance was compared against a radiologist with 35 years of experience to assess clinical utility, finding that it could not match the radiologist's performance. Using SHAP, it was also found that certain radiomic features, such as the mean and standard deviation of gradient magnitude and entropy, significantly differed between the two tumor types. Drawbacks include the study's retrospective nature, using only one type of MRI, and over-representing appendicular chondrosarcomas compared to cartilaginous tumors in the study population. The second set of papers used neural networks to differentiate benign from malignant bone tumors on X-ray images . On the one hand, in , a CNN (EfficientNet-B0) was developed on a dataset of 2899 radiographic images from 1356 patients with primary bone tumors from 5 institutions (3 for training, 2 for validation), including benign (1523 images, 679 patients), intermediate (635 images, 317 patients), and malignant (741 images, 360 patients) growths.
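The per-class F1 scores quoted for the MRI study follow directly from the reported PPV and sensitivity, since F1 is their harmonic mean; a quick check (the formula is standard, the inputs are the figures quoted above):

```python
def f1(ppv, sensitivity):
    """F1-score: harmonic mean of precision (PPV) and recall (sensitivity)."""
    return 2 * ppv * sensitivity / (ppv + sensitivity)

act_f1 = f1(0.92, 0.98)  # ACT class: rounds to 0.95, matching the reported value
cs_f1 = f1(0.94, 0.80)   # grade II CS class: rounds to 0.86, matching the reported value
```

As a harmonic mean, F1 is pulled toward the lower of the two inputs, which is why the CS class's 80% sensitivity drags its F1 well below its 94% PPV.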
The CNN model was developed for binary (benign versus not benign and malignant versus not malignant) and three-way (benign versus intermediate versus malignant) tumor classification. The authors also compared the model’s triple-way classification performance against two musculoskeletal subspecialists with 25 and 23 years of experience and three junior radiologists with 6, 1, and 7 years of experience. The deep learning algorithm had similar accuracy to the subspecialists and better performance than junior radiologists. However, only a modest number of patients was used for validation (639 X-rays from 291 patients), tumor classes were unbalanced (smaller number of benign bone tumors compared to intermediate and malignant), and the pipeline was not fully automated. In contrast, other authors resorted to a non-deep ANN that uses radiomic features extracted from X-ray images and demographic data to classify and differentiate malignant and benign bone tumors . The ANN was developed on 880 patients with the following conditions: (i) malignant tumors: chondrosarcoma, osteosarcoma, Ewing’s sarcoma, plasma cell myeloma, non-Hodgkin lymphoma B cell, and chordoma; (ii) benign subtypes: osteochondroma, enchondroma, chondroblastoma, osteoid osteoma, giant cell tumor, non-ossifying fibroma, haemangioma, aneurysmal bone cyst, simple bone cyst, fibrous dysplasia. The method was externally validated on 96 patients from a different institution, and performance was compared against four radiologists (two residents and two specialized). The model was more sensitive than both radiologist groups but was outperformed by the specialized radiologists in accuracy and specificity. In addition, the model requires manual segmentations and can only distinguish between benign and malignant tumors and not specific subtypes. As shown in Table , five studies entailed the assessment of metastatic cancer, that is, secondary tumors spread from different tissues. 
From these, three focused on cancer spread to organs , while two evaluated metastasized nodes.

Organ metastases

In , models were created to predict the risk of bone metastasis and prognosis (three-year overall survival) for kidney cancer patients. To achieve optimal performance, the researchers developed and compared eight ML models: DTs, RFs, MLPs, Logistic Regression, Naïve Bayes classifier, XGBoost, SVMs, and kNN. The text-based models were trained with 71 414 patients from the SEER database (USA) and externally validated with 963 patients from a Chinese institution (323 women, 640 men). The results showed that their XGBoost-based models had the best accuracy in predicting bone metastasis risk and prognosis. The risk prediction model (diagnosis) outperformed TNM-7 only regarding AUC (0.98 vs. 0.93), while the prognostic model exceeded TNM-7's predictions for all tested metrics (AUC, accuracy, sensitivity, PPV, and F1-score). Using SHAP analysis, the authors also unveiled that the key factors influencing these outcomes were age, sex, and tumor characteristics. Although trained on ethnically different patients, these models were only validated on Asian subjects and not compared against clinicians, so further studies are required to establish clinical validity and utility. The second paper explores the effectiveness of a deep learning-based algorithm (CNN) in detecting and classifying liver metastases from colorectal cancer using CT scans . In this South Korean monoinstitutional study, 502 patients were used for training, and temporally different patients (40 with 99 metastatic lesions, 45 without metastases) were used for validation. The algorithm's detection and classification performance was compared to three radiologists (with 2, 3, and 20 years of experience in liver imaging) and three second-year radiology residents.
Although showing a higher diagnostic sensitivity than both types of clinicians, the six radiologists outperformed the model in the area under the alternative free-response ROC curve (AUAFROC; detection) and false positives per patient (FPP; classification). In addition, the CT scans had been captured eight years before the analyses. The third study was conducted in a clinically realistic scenario, and the model has been implemented in practice . The model was designed to predict 3-month mortality in patients with solid metastatic tumors for several types of cancer (breast, gastrointestinal, genitourinary, lung, rare) and treatment alterations in an outpatient setting. The authors trained a Gradient-Boosted Trees Binary Classifier with observations from 28 484 patients (deceased and living) and 493 features drawn from demographic characteristics, laboratory test results, flowsheets, and diagnoses. The model was silently deployed in the patients' EHRs for 20 months to compare its predictions against 74 oncologists. This prospective temporal validation study involved 3099 encounters from 2041 ethnically diverse patients. The model outperformed oncologists in all metrics for aggregate (general, with and without treatment alterations), gastrointestinal, genitourinary, and lung cancers but was less sensitive than the professionals for rare and breast metastatic tumors. Although currently available in medical practice, the authors note that further research is needed to validate whether using the model improves prognostic confidence and patient engagement.

Node metastases

Two models were developed to diagnose node metastases. In , the authors aimed to classify cervical lymph node metastasis from thyroid cancer using CT scans . The researchers had previously developed a CNN (Xception architecture) trained on a dataset of 787 axial preoperative CT scans. This study validated the system's performance on 3 838 images from 698 patients (unknown female-male ratio) and used Grad-CAM to explain the model's reasoning.
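A caveat when reading the PPV and NPV figures reported throughout this review: unlike sensitivity and specificity, predictive values depend on disease prevalence, so they are only comparable within the same cohort. The identity is standard Bayes' rule; the numbers below are invented for illustration:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Predictive values from test characteristics via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Identical test characteristics, very different PPV once prevalence drops:
balanced = ppv_npv(0.90, 0.90, 0.50)   # enriched cohort: PPV = 0.90
screening = ppv_npv(0.90, 0.90, 0.05)  # screening cohort: PPV falls to ~0.32
```

This is one reason why models validated on enriched retrospective cohorts can report flattering PPVs that would not hold in a screening population.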
The researchers also evaluated the clinical utility of the model by comparing seven radiologists' performance (one expert, six trainees) with and without its assistance. While aided by the system, the expert's accuracy, sensitivity, specificity, PPV, and NPV were all found to increase, while only accuracy, specificity, and NPV improved for the trainees. This study was retrospective and conducted in a single institution, and the results obtained were not satisfying enough to justify clinical implementation. The second and last document describes developing an ultrasound-based ML model to assess the risk of sentinel lymph node metastasis (SLNM) in breast cancer patients . First, the authors compared ten algorithms to achieve an optimal model: SVM, RF, LDA, Logistic Regression, NB, kNN, MLP, Long Short-Term Memory, CNN, and XGBoost. The best algorithm (XGBoost) was then integrated into a clinical model, and SHAP was used to analyze its diagnostic performance. XGBoost was trained with 902 patients, and external validation consisted of 50 temporally separate women. The authors also compared their tool with a radiologist's diagnostic evaluations (unknown years of experience). The results showed that the ML model could predict the risk of SLNM in breast cancer patients based on ultrasound image features with high accuracy (84.6%), having outperformed the radiologist. In addition, SHAP analysis deemed suspicious lymph nodes, microcalcifications, spiculation at the edge of the lesion, and distorted tissue structure around the lesion as the model's most significant features. Nonetheless, this research was retrospective and used a very small number of patients from a single institution with limited pathological types of breast cancer. We conducted a scoping review to gather externally validated ML algorithms developed for patient care in oncology whose clinical utility has also been assessed. Given the rapidly evolving nature of the field and the potential for novel approaches and emerging research, and unlike previous reviews , a deliberate decision was made not to restrict the search strategy or outcomes stringently.
The objective was to adopt a comprehensive and inclusive process to capture a diverse range of literature that could potentially contribute to our understanding of externally validated machine learning algorithms in the context of oncology practice. This approach allowed for exploring various cancer variants, clinical outcomes, validation methodologies, and clinical utility assessments without preconceived limitations that might have excluded relevant studies.

Principal findings

The findings from this scoping review reveal several critical insights into the landscape of ML and DL applications in cancer-patient-related decision-making. A prominent trend is the growing recognition of, and interest in, these methods. The dominance of papers focused on patients and medical issues (versus computational journals, Fig. A) highlights this growing enthusiasm, signals a strong emphasis on tackling clinical challenges, and reflects a paradigmatic transition from theoretical and computational considerations toward practical, patient-oriented solutions. This is underscored by the significant rise in relevant sources after 2018, particularly in 2020, 2021, and 2022 (Fig. B). However, it is crucial to note that many papers were excluded due to insufficient external validation and clinical utility assessment (Fig. ), showing that model development and testing methodology still lack standardization, which agrees with the literature . These observations collectively emphasize the evolution and maturation of the field, yet they also serve as a call to action for enhancing the methodological rigor of research endeavors. Concerning the first research question, we found that CNNs have risen to prominence and are now the backbone of most research initiatives (33/56 papers). Random Forests and XGBoost, while less common, still played significant roles, featuring in 7/56 and 6/56 studies, respectively, adding diversity to the oncology decision-making landscape.
While lung cancer and digestive system assessments were the primary focus, these algorithms demonstrated versatile applicability across various cancer types. Moreover, the emphasis on image-based analyses reflects the potential of ML in augmenting the accuracy of diagnostic processes. However, the limited attention to risk stratification and pharmacotherapy research is a notable caveat. Likewise, the underutilization of radiomics in image studies indicates a missed opportunity. Incorporating radiomics can provide a wealth of information about tumor characteristics and heterogeneity, enriching our understanding and predictive capabilities in oncology. These are areas where ML can make significant contributions to the field, highlighting future directions for research and untapped potential for exploring alternative methodologies. Indeed, methodological considerations highlight several areas that demand attention. The simultaneous development and validation of models in most papers could potentially introduce partiality . Further, the limited sample sizes in many studies, with the majority involving fewer than 200 patients, raise concerns about the generalizability and robustness of these models . Equally, except for three prospective studies and four pieces of research encompassing both retrospective and prospective datasets, the selected papers were mainly retrospective (49/56), a less rigorous design potentially lowering data quality and compromising reliability . Nonetheless, in contrast to previous reviews , we witnessed a substantial increase in multi-institutional studies, marking a positive transformation in the landscape of oncological research. The shift towards collaborative efforts involving multiple centers brings diversity to the study populations, which is critical for generalizing findings to broader patient groups and instilling confidence in research outcomes. 
The findings from this scoping review reveal several critical insights into the landscape of ML and DL applications in cancer-patient-related decision-making. A prominent trend is their increasing recognition and interest. The dominance of papers focused on patients and medical issues (versus computational journals, Fig. A) highlights this growing enthusiasm and a strong emphasis on tackling clinical challenges and reflects a paradigmatic transition from theoretical and computational considerations toward practical, patient-oriented solutions. This is underscored by the significant rise in relevant sources after 2018, particularly in 2020, 2021, and 2022 (Fig. B). However, it's crucial to note that many papers were excluded due to insufficient external validation and clinical utility assessment (Fig. ), showing that the model development and testing methodology still lacks standardization, which agrees with the literature. These observations collectively emphasize the evolution and maturation of the field, yet they also serve as a call to action for enhancing the methodological rigor of research endeavors.
Concerning the first research question, we found that CNNs have risen to prominence and are now the backbone of most research initiatives (33/56 papers). Random Forests and XGBoost, while less common, still played significant roles, featuring in 7/56 and 6/56 of the studies, respectively, adding diversity to the oncology decision-making landscape. While lung cancer and digestive system assessments were the primary focus, these algorithms demonstrated versatile applicability across various cancer types. Moreover, the emphasis on image-based analyses reflects the potential of ML in augmenting the accuracy of diagnostic processes. However, the limited attention to risk stratification and pharmacotherapy research is a notable caveat. Likewise, the underutilization of radiomics in image studies indicates a missed opportunity. Incorporating radiomics can provide a wealth of information about tumor characteristics and heterogeneity, enriching our understanding and predictive capabilities in oncology. These are areas where ML can make significant contributions to the field, highlighting future directions for research and untapped potential for exploring alternative methodologies. Indeed, methodological considerations highlight several areas that demand attention. The simultaneous development and validation of models in most papers could introduce bias. Further, the limited sample sizes in many studies, with the majority involving fewer than 200 patients, raise concerns about the generalizability and robustness of these models. Equally, except for three prospective studies and four pieces of research encompassing both retrospective and prospective datasets, the selected papers were mainly retrospective (49/56), a less rigorous design potentially lowering data quality and compromising reliability.
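The optimism introduced by developing and validating a model on the same data can be sketched with a toy, dependency-free example (hypothetical cohorts and scores, not data from the reviewed studies): a cutoff tuned on one institution's cohort looks perfect internally but degrades on an untouched external cohort.

```python
# Illustrative sketch only: a decision threshold is "developed" on one
# cohort and then evaluated, untouched, on a second cohort -- the
# external-validation discipline the review finds lacking when development
# and validation happen on the same data.

def fit_threshold(scores, labels):
    """Pick the cutoff that maximises accuracy on the development cohort."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(set(scores)):
        acc = sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(scores, labels, t):
    """Fraction of cases where the thresholded score matches the label."""
    return sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)

# Institution A: development cohort (scores stand in for any risk marker)
dev_scores = [0.2, 0.4, 0.35, 0.8, 0.7, 0.9]
dev_labels = [0, 0, 0, 1, 1, 1]

# Institution B: external cohort, never touched during development
ext_scores = [0.3, 0.5, 0.6, 0.85]
ext_labels = [0, 0, 1, 1]

cutoff = fit_threshold(dev_scores, dev_labels)
print("apparent (internal) accuracy:", accuracy(dev_scores, dev_labels, cutoff))
print("external accuracy:", accuracy(ext_scores, ext_labels, cutoff))
```

On this toy data the internally tuned cutoff scores perfectly on its own cohort but loses accuracy externally, which is precisely why separate external cohorts give a less biased performance estimate.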
Nonetheless, in contrast to previous reviews, we witnessed a substantial increase in multi-institutional studies, marking a positive transformation in the landscape of oncological research. The shift towards collaborative efforts involving multiple centers brings diversity to the study populations, which is critical for generalizing findings to broader patient groups and instilling confidence in research outcomes. Collaborative research involving several institutions augments resources, expertise, and data access, offering a deeper understanding of research questions. However, the infrequent international validation and the paucity of data and code sharing in multi-institutional studies present substantial hurdles. These challenges obstruct the path to enhanced reproducibility and collaborative progress. Scientifically, they emphasize the importance of standardizing data-sharing practices and code accessibility to facilitate transparency, rigor, and cooperation in the field. Besides, the disconnect between data used in research and real-world clinical scenarios is an essential finding. In a clinical environment, both text and image-based information are often simultaneously available, making it crucial for ML models to adapt to such real-world complexities. The prevalence of models designed for binary classification, while suitable for emergency settings, reveals a limitation. Clinical decision-making is a complex process that often involves navigating numerous potential diseases, each with unique characteristics, presentations, and treatment considerations. The overreliance on binary classification fails to capture this richness and underscores the need for more nuanced approaches. Furthermore, the observation that only two models have been effectively implemented in clinical practice highlights the gap between research findings and practical implementation.
This finding underscores the challenges in translating scientific progress into real-world healthcare contexts. It draws attention to the necessity of comprehensive validation, addressing regulatory considerations, and managing the integration of new technologies into existing clinical workflows. Additionally, building trust in AI systems is a crucial scientific contribution. The employment of XAI models in 15 reviewed papers demonstrates a proactive effort to enhance transparency and accountability. XAI provides insights into the underlying features, variables, or patterns that contribute to the model's decision-making process, enabling clinicians to comprehend and validate outputs and allay their wariness. This multi-dimensional approach acknowledges the technical and human factors critical for AI's successful implementation in healthcare. Regarding the second research question, two main comparators were used to evaluate clinical utility: clinicians and routine clinical scoring systems and tools, with only one study adopting both types of comparative analyses. An important consideration is the wide variability across studies in the number of included clinicians. While 499 medical professionals were identified across the reviewed studies, it is crucial to note that the distribution was heavily skewed. Specifically, only six studies involved a substantial number of clinicians (twenty or more). At the same time, eleven included a moderate number (between five and eighteen), and most had a considerably smaller sample size (four or fewer clinicians, 24 studies). Furthermore, the observed variability underscores the importance of reporting detailed clinician characteristics. Although the number of clinicians was reported in the studies, there was limited information regarding their specific backgrounds, years of experience, and areas of specialization.
Of the 41 studies comparing models against clinicians, eleven did not report years of experience, and ten only reported rank. Clinician expertise and experience can significantly influence diagnostic accuracy and decision-making outcomes, so studies with few clinicians of unreported proficiency may be more susceptible to bias and may not encompass the full spectrum of clinical decision-making. Besides, none of the comparisons were carried out in randomized trials, which is the most accurate way of testing utility. Clinical utility was assessed mainly by comparing model and clinician performance separately, intended to evaluate each entity’s capabilities independently and capture the variations in clinical decision-making among different individuals or groups. Although helpful in calculating inter- and intra-observer variability, this approach may overlook the interaction dynamics between AI and clinicians and not fully reflect the complexities and challenges of real-world clinical practice. Conversely, performance with and without AI assistance was evaluated in ten papers, which helps discern the unique contributions of AI in terms of augmenting clinician judgment, providing additional insights, or improving efficiency. In addition, sixteen studies benchmarked the clinical utility of machine learning models against twelve commonly used clinical tools. Although more prone to bias and less generalizable, this type of comparison provides a uniform reference point for evaluating performance, assessing the practical impact and potential improvements the new method offers over the current standard of care. There is also a clear need for more comprehensive and standardized research in clinical utility, fostering a more effective and seamless integration of AI into healthcare decision-making.
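The two comparison paradigms described above — standalone model versus clinician, and the same clinician with and without AI assistance — can be illustrated with a minimal sketch (entirely hypothetical reads, not drawn from the reviewed studies), summarizing each reader with sensitivity and specificity:

```python
# Toy illustration of the clinical-utility comparisons discussed in the text:
# a standalone model, an unassisted clinician, and the same clinician
# re-reading with AI assistance, all scored against the same ground truth.

def sens_spec(preds, truth):
    """Sensitivity and specificity of binary reads against ground truth."""
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(preds, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))

truth        = [1, 1, 1, 1, 0, 0, 0, 0]   # ground-truth malignancy
model        = [1, 1, 1, 0, 0, 0, 1, 0]   # standalone model reads
clinician    = [1, 1, 0, 0, 0, 0, 0, 0]   # unassisted clinician reads
clinician_ai = [1, 1, 1, 0, 0, 0, 0, 0]   # same clinician with AI assistance

for name, preds in [("model", model), ("clinician", clinician),
                    ("clinician + AI", clinician_ai)]:
    sens, spec = sens_spec(preds, truth)
    print(f"{name:14s} sensitivity={sens:.2f} specificity={spec:.2f}")
```

In this toy setup the assisted read gains sensitivity over the unassisted one without losing specificity, which is the kind of incremental contribution the with/without-AI design can isolate and the separate-performance design cannot.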
Future studies should strive for a more inclusive representation of clinicians, prioritize randomized trials for robust validation, and aim for a thorough understanding of how AI can complement and enhance human expertise. Answering the third research question involves the reported performance during both external validation and clinical utility assessment. The impressive performance of CNNs across various cancer types presents a vital scientific contribution. Their consistently high performance underscores their reputation as a powerful tool in patient-focused cancer research. Additionally, the strong performance of Gradient and Decision Tree-based algorithms in diverse cancer-related tasks reveals an underrepresented facet of ML research. This finding highlights an opportunity to explore and evaluate different ML approaches in oncology applications. The variability in reporting discrimination and calibration metrics, while illuminating the diversity of evaluation methods, raises a critical concern. The lack of standardization hampers the reliability and accuracy of risk assessments, emphasizing the need for consistency in reporting and metrics. In assessing clinical utility, the notable superiority of ML models over clinical tools marks a significant scientific advancement. These findings signal ML's potential to substantially enhance clinical decision-making processes. However, they also reveal that ML models have not yet reached the same level of expertise as human clinicians in certain aspects, pointing to a collaborative approach where AI systems complement and support clinicians rather than replace them. This collaborative model could offer a path forward to augmenting healthcare capabilities. Finally, six main research gaps were found throughout the review. First, although common cancers were extensively studied in adults, metastases, rare tumors, and different age groups were only investigated in five, three, and two papers, respectively.
For example, the limited research focus on rare tumors is evident in the absence of studies examining breast cancer in men. This paucity might be attributable to insufficient publicly available data, the high cost of collecting new data in bulk, and scarce interaction between medical centers. Second, most models were developed for diagnosis, outcome predictions, or risk stratification, while studies on optimal treatment and drug administration options are still lacking. Third, most studies were retrospective with small sample sizes, thus requiring further prospective validations in diverse patient populations to ensure generalizability. Fourth, none of the image-based studies addressed low-quality images; this is essential for real-world clinical applications, as not all images obtained in practice may be optimal. Fifth, no study assessed utility on patient outcomes, which is not only the ultimate goal but also required for insurance coverage and crucial for determining actual clinical utility and effectiveness. Sixth and last, the absence of studies involving digital twins – even during abstract inspection – is worth mentioning. Further exploration of ML models with these virtual replica technologies could provide meaningful contributions to their application in clinical practice. Likewise, these gaps could be bridged by encouraging collaboration in healthcare: merging information from several institutes – ideally at an international level – would yield more comprehensive data, less bias from country-specific patient populations and treatment recommendations, and larger sample sizes, and consequently a higher generalization capacity and faster, more accurate diagnoses and treatment decisions. This research stands out for its inclusivity, encompassing diverse patient populations, ML algorithms, and hospital settings, enhancing the applicability of its findings.
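The standardized reporting of discrimination and calibration metrics called for above can be made concrete with a small, dependency-free sketch (toy predicted probabilities, illustrative only): AUROC summarizes discrimination, while the Brier score is a simple overall probability-accuracy measure often reported alongside calibration curves.

```python
# Minimal implementations of two commonly reported prediction-model metrics.

def auroc(probs, labels):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(probs, labels):
    """Mean squared error of predicted probabilities (lower is better)."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

probs  = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # hypothetical predicted risks
labels = [1,   1,   0,   1,   0,   0]     # observed outcomes

print(f"AUROC (discrimination): {auroc(probs, labels):.3f}")
print(f"Brier score: {brier(probs, labels):.3f}")
```

Reporting both kinds of metrics side by side, in the same form across studies, is exactly the consistency the review argues is needed for comparable risk assessments.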
Its contributions lie in systematically exploring external validation and clinical utility evaluation for ML algorithms, bridging the gap between AI researchers and medical professionals. Lastly, this work highlights the paramount significance of the synergy between AI researchers and medical practitioners. Interdisciplinary collaboration is foundational for promoting the adoption of AI technologies in healthcare and enhancing their scientific and clinical contributions. It ensures that research is translated into innovative, hands-on solutions that align with clinical needs and standards, support disease management and clinical decision-making, and positively impact patient care.
Study limitations
Despite the valuable insights gained from this study, it is essential to acknowledge its limitations. First, relevant studies might have been missed despite efforts to design a comprehensive search strategy and the inclusion of databases from different research fields. For example, sequencing, omics, and molecular biomarker discovery studies were excluded from this review. Notwithstanding their critical role in advancing personalized medicine, genomic, transcriptomic, and proteomic approaches still face obstacles to widespread clinical adoption due to their complexity, the specialized analytical skills required, the need for substantial adjustments in clinical workflows, and significant regulatory challenges. Given these constraints, this review emphasized machine learning algorithms immediately employable in clinical operations, ensuring research is relevant and actionable within healthcare settings. However, this selection reflects a limitation.
While this narrowed the focus to technologies with broader immediate applicability, not incorporating genetics and omics studies may have inadvertently excluded a subset of literature that explicitly investigates the interplay between genetic factors, treatment regimens, and therapeutic responses, offering a potential explanation for the absence of papers exploring drug and treatment responses and digital twin approaches. Second, this review did not extensively cover the emerging challenges and opportunities of stringent data protection laws, notably the potential for synthetic data in research. While this exclusion aimed to evaluate model performance on genuine patient data, thereby accounting for the complexities and variabilities inherent in healthcare, synthetic data offers a promising avenue for navigating privacy concerns and enhancing dataset diversity. Hence, its absence marks a limitation, reflecting areas beyond the immediate scope of this review yet critical for the future of ML applications in oncology. Third, although the review revealed mostly positive results highlighting ML’s promise, the risk of publication bias cannot be ruled out, as studies with positive or significant findings are more likely to be published than those with unfavorable or nonsignificant verdicts. Similarly, the emphasis on SJR as a quality measure, while aiming to ensure the inclusion of high-impact research, acknowledges the potential oversight of specialized, significant studies that might not yet have achieved wide recognition but contribute meaningfully to the field. Furthermore, the selection process did not extend to evaluating the methodological quality or risk of bias within the included studies, potentially limiting the ability to characterize the overall strength of the evidence.
Fourth, a significant portion of the studies was retrospective, a design more susceptible to selection bias and data quality concerns than prospective analyses, which may affect the robustness of the conclusions. Small sample sizes and the lack of diversity within study populations further challenge the findings' generalizability, emphasizing the need for broader testing of machine learning models across diverse clinical contexts. Additionally, external validation and clinical utility evaluations, often conducted within restricted scopes, may fail to fully represent the complexities encountered in real-world healthcare settings.
Gastric cancer research has also benefited from AI's ability to diagnose and predict treatment responses, offering new avenues for patient care. Similarly, using ML in breast cancer has streamlined screening processes, and in bone cancer, these algorithms have assisted in distinguishing benign from malignant lesions, allowing for earlier detection and treatment. However, the path to fully leveraging ML in oncology highlights a pronounced need to refine model sensitivity and specificity. Minimizing false positives and negatives is critical, particularly for cancers with intricate presentation patterns. Furthermore, our findings reveal a substantial gap in addressing less common and rare cancers, raising an imperative for the research community to extend its investigative efforts. By broadening the application of ML technologies to encompass these lesser-studied cancers, there is an opportunity to deepen their understanding and craft more inclusive and precise diagnostic and therapeutic approaches, thereby maximizing AI's impact across the full spectrum of oncological patient care. Moving forward, we propose a comprehensive roadmap to guide the implementation of AI in clinical settings. The initial step involves standardized data collection and curation, emphasizing the creation of diverse, well-annotated datasets that accurately represent the complexity of real-world clinical scenarios. These datasets ensure consistency and reliability in model performance across various studies and healthcare institutions. The subsequent stages are centered around the rigorous development, external validation, and utility testing of AI models, placing a premium on homogeneity in discrimination and calibration metrics, robustness, and generalizability. The developed models should be integrated into clinical workflows in close collaboration with healthcare professionals, and ongoing training programs should be implemented to enhance their understanding of AI concepts.
Simultaneously, establishing frameworks that address ethical governance, privacy protection, and regulatory compliance is crucial for navigating the legal and ethical considerations associated with AI implementation and promoting data sharing. Finally, fostering a culture of continuous improvement is essential, where AI models are regularly updated and refined based on feedback from clinicians, new data, and advancements in the field. In conclusion, this review issues a resounding call for collective action from oncology stakeholders – clinicians, researchers, policymakers, and healthcare institutions. The findings reinforce the pressing need to fully embrace machine learning as an asset for patient-centered cancer research and decision-making. In this cooperative endeavor, it is imperative to ensure equitable access to high-quality data, engage in large-scale prospective studies, and foster international collaboration for the robust validation of AI models across diverse patient populations. Furthermore, prioritizing investments in transparency, explainability, and the ongoing refinement of AI algorithms is paramount to achieving clinical utility. The dawn of realizing the full potential of medical AI is upon us, and this journey mandates an unwavering commitment to ethics and an unceasing quest for progress. The future of cancer care beckons, and it is our collective responsibility to answer that call.
Additional file 1. Protocol. This document presents the protocol developed for the scoping review.
Additional file 2. PRISMA 2020 Checklist. This file contains the completed PRISMA 2020 checklist documenting the reporting of the scoping review methodology and findings.
Additional file 3. Search Strategy. This document details the complete search strategy and database-specific filters applied in the scoping review.
Additional file 4. Ranking Filter. This document contains the Python-based ranking filter developed to filter journals based on SCImago Journal Rank metrics.
Additional file 5. Data Charting. This spreadsheet presents the comprehensive data extraction and charting results from the articles selected for inclusion in the scoping review. |
The role of peer support in coping and adjustment to dialysis and transplantation: Study protocol | 677c85ea-e713-435d-8b75-24b33e320964 | 11809911 | Surgical Procedures, Operative[mh] | Kidney replacement therapies (KRT) (haemodialysis, peritoneal dialysis and transplantation), are intensive and life changing, requiring multiple adaptations to a person’s lifestyle . Some people successfully adjust to treatments; others experience poor psychological outcomes. Many people undergoing dialysis report experiencing emotional distress, fear, anxiety, depression, loss, uncertainty, regret, guilt, and find the treatment burdensome . Kidney transplant recipients may also experience fear and anxiety, difficulties adjusting to continued treatment burden, frustration, and disillusionment when post-transplant recovery and quality of life do not match expectations . In the United Kingdom (UK) people approaching the need for KRT receive specialist kidney care including the provision of information and support by health professionals to help them make optimal decisions about kidney replacement therapies . However, people report reluctance to accept the need for treatment, perceive a lack of choice, hold unrealistic expectations about prognosis and/or quality-of-life, and desire more psychosocial information . It is suggested that regret experienced after dialysis initiation may be due to a mis-match between people’s expectations and subsequent experience . Peer support is an innovative, policy-advocated method of providing informational, emotional and appraisal support . Peer support involves people with kidney disease gaining an understanding from others with the same illness about their lived illness experience . It may, when provided as an adjunct to education provided by health professionals, help people develop more realistic expectations of kidney replacement therapies and thus adjust better to, and be more satisfied with treatment . 
Peer support is valued by people with kidney disease, providing encouragement, empathy, confidence, reassurance, and hope . It uniquely utilizes layman’s terms to communicate health information, helping people understand the patient perspective of KRT and psychosocial consequences that may not be appreciated or conveyed by healthcare practitioners . However, peer support provision across the UK is not routinely offered through kidney services; only 25% of units provide ‘formal’ support (governed service provided by trained peers). Little is known about mechanisms that make peer support ‘good’ or successful . It is suggested, but poorly evidenced, that formal support from trained peers is better than informal support received from untrained peers (encountered incidentally in waiting rooms or social media) because trained peers are able to build rapport and less likely to present exaggerated, unbalanced, scary, information . Also unknown is how the similarity or differences between supporter and peer support recipient (on dimensions such as age, gender, and ethnicity) influence its outcomes . It is hypothesised that the greater the similarity, the more empathy, trust, and role-modelling can occur, and therefore the greater the benefits. This may be particularly relevant for minority groups. People with kidney failure desire culturally and linguistically appropriate treatment information . Peer support is uniquely placed to provide culturally relevant information by targeting it towards specific ethnic minority groups, and by matching supporters and recipients from local backgrounds . A trial of peer support for people receiving haemodialysis showed that it preferentially benefitted those from ethnic minority groups . Low health literacy is common in people with kidney failure and is associated with poor knowledge about kidney disease, self-management behaviours and health‐related quality-of-life . 
Peer support can also help improve people's understanding of health information by providing information in an accessible, patient-centred and relatable format. Whilst developing our study, we conducted a focus group as part of our patient and public involvement (PPI) activities, to learn how well people felt they were prepared for KRT. Participants identified that a) people with kidney failure who are unwell when they start treatment have little time to contemplate what dialysis would entail and prepare for changes, b) people with kidney failure who are able to talk with other people whilst having treatment are more likely to cope better, c) some people struggle to cope with the adjustments required to fit dialysis into daily life, and d) some people are fearful about the impact that treatment could have on family relationships. The PPI participants also discussed the associated 'treatment burden' of dialysis and felt that this aspect of treatment was not adequately addressed in current practice. For example, they would have valued more support on specific topics such as loss of libido and body image, to help prepare them for the lived experience of treatment. These findings support the theoretical basis for our research, namely that standard care insufficiently prepares people for the lived reality of life on KRT and that peer support may be an acceptable and successful intervention to make dialysis and transplant more tolerable and easy to live with. Understanding the utility of peer support is important when assessing its impact and the needs of services to integrate it within their care pathway. International quality standards supporting healthcare decision making suggest that, whilst 'patient narratives' may improve health literacy, provide comfort and prepare people for treatment, they may bias people's decision making when they are deliberating between two or more treatment options.
Better understanding of the mechanisms of successful peer support will facilitate optimal development of peer programmes and allocation of peer resources.
Aim
This research will identify active ingredients underpinning how peer support helps people adjust to kidney replacement therapies in order to design the most effective and efficient peer support programs.
Objectives
Develop an in-depth understanding of the mechanisms and impact of standard care, formal and informal peer support, by interviewing people who have received different amounts and types of peer support, pre and post dialysis or transplantation.
Use survey methods to explore the impact of receiving peer support on people living with kidney disease experience, psychosocial and decision quality measures at two timepoints.
Produce a report of our findings to help refine current and future peer support programs.
Design
A mixed methods approach including two studies to address research objectives 1&2. This approach allows investigation from two non-competing perspectives, an in-depth qualitative analysis of people living with kidney disease lived experiences, and a broader quantitative understanding of the topic, with each methodological approach addressing the design limitations inherent with the other.
The Good Reporting of A Mixed Methods Study (GRAMMS) guideline was followed.
Patient and public involvement group
A patient and public involvement (PPI) group has been convened and will provide input to all stages of the project, including developing the interview schedule, selecting appropriate survey measures, dissemination activities, and report writing. An individual with dialysis and transplant experience is a co-applicant and will participate in steering group meetings and provide feedback to the wider PPI group.
Setting
Recruitment for both studies will take place at Leeds Renal Unit, which has ~400 advanced kidney care patients, and King's College Hospital, London, which has ~550 advanced kidney care patients. These large inner-city hospitals include people with kidney disease from diverse social, religious, and cultural backgrounds. King's College Hospital kidney unit has had an active, formal peer support service since 2006; Leeds does not; therefore, we will be recruiting from populations with different experiences of peer support.
Materials
Study materials include consent forms, patient information sheets, an interview schedule (study 1), and a questionnaire (study 2). The interview schedule and questionnaire will be developed by the research team in consultation with a patient and public involvement (PPI) team, guided by the research aim and previous research examining patients' expectations and experiences of kidney disease and its treatments.
Ethics and research governance approvals
Local Research Ethics and Health Research Authority approval was granted by Health and Care Research Wales on 6th March, 2024 (IRAS project ID: 330749).
Study 1 – in-depth interviews with people with kidney failure
We will develop a detailed understanding of people's pre-treatment expectations of, and goals of, care; the lived experience of treatment after commencing dialysis/post-transplantation; differences between the two; and how standard care and peer support of different types might influence both expectations and experience of treatment.
Sample size. There is no formal method for estimating sample size in qualitative research. As a guide, using our prior experience of interviewing this population, we estimate that approximately 25–30 people with kidney failure will be a reasonable sample size to generate sufficient data for these research questions. We will interview the same people at two different points of the patient pathway: Time 1 - pre-treatment, to ascertain views around expectations and goals of care (T1), and at Time 2 - after commencing dialysis/post-transplantation, about lived experience and treatment burden (T2). From our previous experience we know that interviewing people with kidney failure, with its associated high mortality rate, means that we may not be able to follow up everyone at Time 2. In this instance, findings recorded at T1 would still be used in the analysis, and if necessary we will recruit additional people at Time 2 only. Our experiences of recruiting/interviewing at two timepoints will be documented in the final report. Recruitment will be discontinued when saturation is reached and the author judges that no new themes are being generated from the data. At Time 1, adults with chronic kidney disease stages 4&5 (referred to herein as 'kidney failure') will be eligible to participate if they meet one of the following categories.
Attending an Advanced Kidney Care Clinic and contemplating KRTs;
For those recruited at Time 2 only:
Receiving haemodialysis or peritoneal dialysis – up to 6 months after commencement;
Up to 6 months post-transplantation, including people with a working transplant and those with graft failure;
Who have received more than one KRT.
Purposive sampling will ensure participants are recruited into three groups of roughly the same size based on experience of peer support – none, informal and formal. At recruitment, a screening question, i.e., whether or not they have talked to anyone who has lived with KRT, will identify people who have received no or only informal peer support. Using medical records, we will identify people (King's College Hospital) who have documented evidence of receiving formal peer support. There is no upper age limit for participation. Participants must be able to give written, informed consent and be cognitively capable of taking part in an interview.
Recruitment. The research nurse will work with kidney care staff supporting people at different stages of the pathway, i.e., pre-treatment – advanced kidney care clinic clinicians, and post-treatment – dialysis and transplant clinicians, to identify people meeting the eligibility criteria stated above. Our extensive experience of recruiting from kidney services suggests a flexible and, where possible, personal approach will ensure a sufficient number of participants are recruited. The research nurse will approach people attending clinics (advanced kidney care, post-transplantation, and PD) or haemodialysis sessions and discuss the study, hand out a patient information sheet and provide the opportunity for study-related questions. With permission, those people we have approached will be contacted by telephone a week later about their decision to participate.
We will write to people who are identified by staff but whom we are unable to access via an outpatient clinic or on the ward, sending an introductory letter and patient information sheet in the post and asking them to return a reply slip with their contact details if they are willing to take part.
Recruitment of people from ethnic minority backgrounds: We recognise that people taking part in kidney research studies are more likely to be more (health) literate, 'white' and from higher economic backgrounds. To help mitigate against this, we will undertake several steps. Ensuring smaller samples, particularly those in qualitative studies, are stratified on a number of criteria can prove challenging; however, we will be mindful of, and guided by, local Trust and national databases, e.g., the renal registry, to improve the representativeness of our sample in terms of gender, age, ethnicity, and socio-economic status defined by postcode. The research nurse will offer to read through patient information and support survey completion for people who have difficulty reading, i.e., those with low literacy or eyesight problems, or who cannot read English. Learning from recent, successful research focussing on people with kidney disease from ethnic minority backgrounds, we will work with cultural improvement officers, interpreters, family members and members of the kidney team with similar backgrounds, to help boost recruitment of people from minority ethnic groups. Where necessary we will translate patient-facing materials.
Procedure. Pre-treatment interviews (T1) will include open-ended questions to understand people's experience of adjusting to a diagnosis and/or treatment of kidney disease, expectations of KRT and goals of care, and experiences and impact of support from peers.
Interview questions aimed at people on dialysis/post-transplant (T2) will explore how expectations of treatment match people's lived experience, associated treatment burden, and any further experiences and impact of support from peers. Before commencing the interview, participants will have a further opportunity to ask study-related questions and provide written consent to take part. Participants will be given the opportunity to have a relative or nurse present at interview. Permission will be sought from those taking part at T1 to be contacted 6 months later if they meet the criteria for a T2 interview, i.e., have commenced KRT. Interviews will last approximately 60 minutes and will be organised at the participant's convenience, either at home, at hospital, on the telephone, or using an online platform such as Microsoft Teams or Zoom. We will thank participants for their contribution to the research process by providing a £15 voucher.
Transcription, data coding, and analysis. Audio recordings will be transcribed verbatim using standard protocols. Thematic analysis, taking account of the individual's narrative, will be used to analyse interview data with the support of NVivo (Version 20.1.6) to organise the analysis and allow sharing amongst team members. Analysis will be conducted using a critical realist approach, whereby it is acknowledged that an external reality exists that is knowable and that people's experiences are subjective. An initial coding frame will be generated and refined as individuals' accounts are analysed and emergent codes are generated. Each interview will be coded using a mixed deductive and inductive coding frame; 10% of the interviews will be coded by AW and a PPI member to maximise validity and robustness. Where discrepancy exists, the coders will reach consensus by referring to a third member of the research team.
A thematic map will be generated using the method of constant comparison to illustrate the relationships between and within themes with input from AW.
Study 2 – patient outcome survey comparing experiences of standard care and peer support
Adopting a survey will allow a broad exploration of the impact on patient outcomes of receiving standard care compared to peer support. Using questionnaires, we will measure patient experience and psychological measures of coping and adjustment, treatment experience and satisfaction with peer support.
Sample. As with study 1, we will recruit the same individuals at two points of the patient pathway: Time 1 - pre-treatment, to ascertain views around expectations and goals of care (T1), and Time 2 - after commencing dialysis/post-transplantation, about lived experience and treatment burden (T2). The sample will be determined by the same eligibility criteria (clinical characteristics) and stratified (experience of peer support) as outlined for Study 1. Our sample size calculation is based on a population of 950 (drawn from advanced kidney care patients at both sites – as outlined in the 'Setting' section); allowing for a 5% margin of error with 90% confidence, this suggests 212 patients will be sufficient. If necessary we will seek ethical approval to contact people with kidney failure via national charities and kidney patient organisations to reach the required sample size.
Recruitment. Participants will be identified using the methods detailed for Study 1. Where recruitment is face-to-face, surveys will be provided for people to complete at their own convenience and return in a stamped addressed envelope. Those eligible to take part but who cannot be approached directly will receive a covering letter, questionnaire and stamped addressed envelope via the postal service. Permission will be sought from those taking part at T1 to be contacted approximately 4-6 months later to complete a further survey.
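The quoted figure of 212 from a population of 950, with a 5% margin of error at 90% confidence, is consistent with the standard Cochran sample-size formula plus a finite-population correction. The protocol does not state which formula or z-value was used, so the sketch below is illustrative only; the z-score of 1.65 (a common rounding for 90% confidence) and the assumed proportion p = 0.5 are assumptions chosen to reproduce the quoted number.

```python
import math

def survey_sample_size(population: int, margin: float = 0.05,
                       z: float = 1.65, p: float = 0.5) -> int:
    """Cochran's sample-size formula with finite-population correction.

    population -- total eligible patients (here, 950 across both sites)
    margin     -- desired margin of error (5%)
    z          -- z-score for the confidence level (~1.65 for 90%)
    p          -- assumed proportion; 0.5 maximises the required n
    """
    n0 = z ** 2 * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)      # finite-population correction
    return math.ceil(n)

print(survey_sample_size(950))  # 212
```

Note that the exact 90% z-score of 1.6449 yields 211 with the same formula, so the quoted 212 matches the commonly rounded z = 1.65; either value supports the protocol's conclusion that roughly 210-212 respondents are needed.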
We will telephone participants and remind them to complete the questionnaire two weeks after initial contact. Interview and survey data collection will overlap to ensure project milestones are met. Participants eligible to participate in both studies will be invited into either study at the discretion of the research nurse to ensure a representative and diverse sample.
Materials. Questionnaires were developed in consultation with the PPI advisory group to ensure their acceptability and relevance. They include previously established and, where possible, validated measures to assess characteristics of the patient sample (demographics, patient history, peer support experience and satisfaction, physical symptoms) and patient experience and psychological measures of coping and adjustment (goals and expectations of treatment and care, quality-of-life, treatment burden). To minimise questionnaire fatigue, we will employ short-item questionnaires wherever possible. For example, the SURE measure assessing decisional conflict comprises 4 items.
Procedure. Participants will be asked to complete a series of closed- and open-ended questions in the form of a questionnaire booklet. Survey completion will take place at a time and place convenient to the participant and take a maximum of 30 minutes. We will thank participants by giving them a £15 voucher for completing the survey. Participants will be required to return the survey using the postal service and a stamped addressed envelope provided by the research team.
Data analysis. Descriptive statistics will summarize the sample characteristics. Multivariate analyses will look at differences in measures between groups by use of peer support/standard care. Repeated measures analyses will examine differences in experiences over time. Data will be managed using SPSS (Version 27).
Objective 3 – study report
A study summary report will be produced to summarise our findings and identify the active ingredients of successful peer support.
The findings will be disseminated at conferences, press releases and via scientific publication as agreed by the steering group and PPI team. Dissemination to wider patient charities and networks will be facilitated by PPI representatives. Findings will inform the work of the UK peer support working group, which one steering group member chairs. A mixed methods approach including two studies to address research objectives 1&2. This approach allows investigation from two non-competing perspectives, an in-depth qualitative analysis of people living with kidney disease lived experiences, and a broader quantitative understanding of the topic, with each methodological approach addressing the design limitations inherent with the other . The Good Reporting of A Mixed Methods Study (GRAMMS) guideline was followed . A patient and public involvement (PPI) group has been convened and will provide input to all stages of the project including, developing the interview schedule, selecting appropriate survey measures, dissemination activities, and report writing. An individual with dialysis and transplant experience is a co-applicant and will participate in steering group meetings and provide feedback to the wider PPI group. Recruitment for both studies will take place at Leeds Renal Unit which has ~ 400 advanced kidney care patients and King’s College Hospital, London which has ~550 advanced kidney care patients. These large inner-city hospitals include people with kidney disease from diverse social, religious, and cultural backgrounds. King’s College Hospital kidney unit has had an active, formal peer support service since 2006; Leeds does not; therefore, we will be recruiting from populations with different experiences of peer support. Study materials include consent forms, patient information sheets, interview schedule (study 1), questionnaire (study 2). 
The interview schedule and questionnaire will be developed by the research team in consultation with a patient and public involvement (PPI) team, and guided by the research aim, previous research examining patients’ expectations and experiences of kidney disease and its treatments. Local Research Ethics and Health Research Authority approval was granted by Health and Care Research Wales on 6th March, 2024 (IRAS project ID: 330749). We will develop a detailed understanding of people’s pre-treatment expectations of, and goals of care; the lived experience of treatment after commencing dialysis/post-transplantation; differences between the two; and how standard care and peer support of different types might influence both expectations and experience of treatment. Sample size. There is no formal analysis to estimate sample size in qualitative methods. As a guide, using our prior experience of interviewing this population, we estimate that approximately 25–30 people with kidney failure will be a reasonable sample size to generate sufficient data for these research questions. We will interview the same people at two different points of the patient pathway: Time 1 - pre-treatment to ascertain views around expectations and goals of care (T1), and at Time 2 - after commencing dialysis/post-transplantation about lived experience and treatment burden (T2), . From our previous experience we know that interviewing people with kidney failure, with its associated high mortality rate, means that we may not be able to follow up everyone at Time 2. In this instance, findings recorded at T1 would still be used in the analysis, and if neccessary we will recruit additional people at Time 2 only. Our experiences of recruiting/interviewing at two timepoints will be documented in the final report. Recruitment will be discontinued when saturation is reached, and the author judges that no more new themes are being generated from the data . 
At Time 1, adults with chronic kidney disease stages 4&5 (referred to herein as 'kidney failure') will be eligible to participate if they meet one of the following categories. Attending an Advanced Kidney Care Clinic and contemplating KRTs, For those recruited at Time 2 only: Receiving haemodialysis or peritoneal dialysis – up to 6 months after commencement, Up to 6 months post-transplantation including people with a working transplant and those with graft failure, Who have received more than one KRT. Purposive sampling will ensure participants are recruited into three groups of roughly the same size based on experience of peer support – none, informal and formal. At recruitment, a screening question i.e., whether or not they have talked to anyone who has lived with KRT, will identify people who have received none or informal peer support. Using medical records, we will identify people (King’s College Hospital) who have documented evidence of receiving formal peer support. There is no upper age limit for participation. Participants must be able to take written, informed consent and be cognitively capable of taking part in an interview. Recruitment. The research nurse will work with kidney care staff supporting people at different stages of the pathway i.e., pre-treatment – advanced kidney care clinic clinicians, and post-treatment – dialysis and transplant clinicians, to identify people meeting the eligibility criteria stated above. Our extensive experience of recruiting from kidney services suggests a flexible and where possible, personal approach will ensure a sufficient number of participants are recruited. The research nurse will approach people attending clinics (advanced kidney care, post-transplantation, and PD) or haemodialysis sessions and discuss the study, hand out a patient information sheet and provide the opportunity for study related questions. 
With permission, those people we have approached will be contacted by telephone a week later about their decision to participate. We will write to people who are identified by staff, but who we are unable to access via an outpatient clinic or on the ward and send an introductory letter and patient information sheet in the post and ask them to return a reply slip with their contact details, if they are willing to take part. Recruitment of people from ethnic minority backgrounds: We recognise that people taking part in kidney research studies are more likely to be more (health) literate, ‘white’ and from higher economic backgrounds. To help mitigate against this, we will undertake several steps. Ensuring smaller samples, particularly those in qualitative studies, are stratified on a number of criteria can prove challenging, however we will be mindful of, and guided by, local Trust and national databases e.g., renal registry, to improve the representativeness of our sample in terms of gender, age, ethnicity, and socio-economic status defined by postcode. The research nurse will offer to read through patient information and support survey completion for people who have difficulty reading, i.e., those with low literacy, eyesight problems, cannot read English. Learning from recent, successful research focussing on people with kidney disease from ethnic minority backgrounds , we will work with cultural improvement officers, interpreters, family members and members of the kidney team with similar backgrounds, to help boost recruitment of people from minority ethnic groups. Where necessary we will translate patient facing materials. Procedure. Pre-treatment interviews (T1) will include open-ended questions to understand people’s experience of adjusting to a diagnosis and/or treatment of kidney disease, expectations of KRT and goals of care, and experiences and impact of support from peers. 
Interview questions aimed at people on dialysis/post-transplant (T2) will explore how expectations of treatment match people’s lived experience, associated treatment burden, and any further experiences and impact of support from peers. Before commencing the interview, participants will have a further opportunity to ask study-related questions and provide written consent to take part. Participants will be given the opportunity to have a relative or nurse present at interview. Permission will be sought from those taking part at T1 to be contacted 6 months later if they meet the criteria for a T2 interview, i.e., have commenced KRT. Interviews will last approximately 60 minutes and will be organised at the participant’s convenience, either at home, at hospital, on the telephone, or using an online platform such as Microsoft Teams or Zoom. We will thank participants for their contribution to the research process by providing a £15 voucher. Transcription, data coding, and analysis. Audio recordings will be transcribed verbatim using standard protocols. Thematic analysis, taking account of the individual’s narrative, will be used to analyse interview data, with the support of NVivo (Version 20.1.6) to organise the analysis and allow sharing amongst team members. Analysis will be conducted using a critical realist approach, whereby it is acknowledged that an external reality exists that is knowable and that people’s experiences are subjective. An initial coding frame will be generated, which will be refined as analysis of individuals’ accounts proceeds and emergent codes are generated. Each interview will be coded using a mixed deductive and inductive coding frame; 10% of the interviews will be coded by AW and a PPI member to maximise validity and robustness. Where discrepancy exists, the coders will reach consensus by referring to a third member of the research team.
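The protocol does not name an agreement statistic for the double-coded 10% of interviews; Cohen’s kappa is one common, chance-corrected choice for quantifying agreement between two coders. The sketch below is purely illustrative, with hypothetical code labels:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same segments."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    codes = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in codes) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical labels applied by two coders to six transcript segments:
aw = ["burden", "peers", "burden", "goals", "peers", "burden"]
ppi = ["burden", "peers", "goals", "goals", "peers", "burden"]
print(round(cohens_kappa(aw, ppi), 2))  # 0.75
```

Values around 0.6–0.8 are usually read as substantial agreement; under the protocol, any remaining disagreements would still go to the third team member for consensus.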
A thematic map will be generated using the method of constant comparison to illustrate the relationships between and within themes, with input from AW. There is no formal analysis to estimate sample size in qualitative methods. As a guide, using our prior experience of interviewing this population, we estimate that approximately 25–30 people with kidney failure will be a reasonable sample size to generate sufficient data for these research questions. We will interview the same people at two different points of the patient pathway: Time 1 - pre-treatment, to ascertain views around expectations and goals of care (T1), and Time 2 - after commencing dialysis/post-transplantation, about lived experience and treatment burden (T2). From our previous experience we know that interviewing people with kidney failure, with its associated high mortality rate, means that we may not be able to follow up everyone at Time 2. In this instance, findings recorded at T1 would still be used in the analysis, and if necessary we will recruit additional people at Time 2 only. Our experiences of recruiting/interviewing at two timepoints will be documented in the final report. Recruitment will be discontinued when saturation is reached and the author judges that no more new themes are being generated from the data.
Adopting a survey will allow a broad exploration of the impact on patient outcomes of receiving standard care compared to peer support. Using questionnaires, we will measure patient experience and psychological measures of coping and adjustment, treatment experience and satisfaction with peer support. Sample.
As with Study 1, we will recruit the same individuals at two points of the patient pathway: Time 1 - pre-treatment, to ascertain views around expectations and goals of care (T1), and Time 2 - after commencing dialysis/post-transplantation, about lived experience and treatment burden (T2). The sample will be determined by the same eligibility criteria (clinical characteristics) and stratified (experience of peer support) as outlined for Study 1. Our sample size calculation, based on a population of 950 (drawn from advanced kidney care patients at both sites, as outlined in the ‘setting’ section) and allowing for a 5% margin of error with 90% confidence, suggests that 212 patients will be sufficient. If necessary, we will seek ethical approval to contact people with kidney failure via national charities and kidney patient organisations to reach the required sample size. Recruitment. Participants will be identified using the methods detailed for Study 1. Where recruitment is face-to-face, surveys will be provided for people to complete at their own convenience and return in a stamped addressed envelope. Those eligible to take part but who cannot be approached directly will receive a covering letter, questionnaire and stamped addressed envelope via the postal service. Permission will be sought from those taking part at T1 to be contacted approximately 4-6 months later to complete a further survey. We will telephone participants and remind them to complete the questionnaire two weeks after initial contact. Interview and survey data collection will overlap to ensure project milestones are met. Participants eligible to participate in both studies will be invited into either study at the discretion of the research nurse to ensure a representative and diverse sample. Materials. Questionnaires were developed in consultation with the PPI advisory group to ensure their acceptability and relevance.
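The quoted sample size can be reproduced, to within rounding, with Cochran’s formula plus a finite population correction. This is a sketch, not the calculator the team used: it assumes the most conservative proportion p = 0.5 and z ≈ 1.645 for 90% confidence, which yields 211 rather than the stated 212 (different tools round slightly differently).

```python
import math

def cochran_sample_size(population, margin=0.05, z=1.645, p=0.5):
    """Sample size for estimating a proportion, with finite population correction.
    z = 1.645 corresponds to 90% confidence; p = 0.5 maximises the variance."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)           # finite population correction
    return math.ceil(n)

print(cochran_sample_size(950))  # 211 with these conventions; the protocol quotes 212
```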
They include previously established and, where possible, validated measures to assess characteristics of the patient sample (demographics, patient history, peer support experience and satisfaction, physical symptoms), patient experience, and psychological measures of coping and adjustment (goals and expectations of treatment and care, quality of life, treatment burden). To minimise questionnaire fatigue, we employ short-item questionnaires wherever possible. For example, the SURE measure assessing decisional conflict is 4 items. Procedure. Participants will be asked to complete a series of closed- and open-ended questions in the form of a questionnaire booklet. Survey completion will take place at a time and place convenient to the participant and take a maximum of 30 minutes. We will thank participants by giving them a £15 voucher for completing the survey. Participants will be required to return the survey using the postal service and a stamped addressed envelope provided by the research team. Data analysis. Descriptive statistics will summarize the sample characteristics. Multivariate analyses will look at differences in measures between groups by use of peer support/standard care. Repeated measures analyses will examine differences in experiences over time. Data will be managed using SPSS (Version 27).
A study summary report will be produced to summarise our findings and identify the active ingredients of successful peer support. The findings will be disseminated at conferences, through press releases and via scientific publication, as agreed by the steering group and PPI team. Dissemination to wider patient charities and networks will be facilitated by PPI representatives. Findings will inform the work of the UK peer support working group, which one steering group member chairs. Providing peer support in kidney units is increasingly popular, yet provision is inconsistent and generally low quality. At present, little is known about its utility in making dialysis and transplants more tolerable and easier to live with. Providing an evidence base for the use of peer support will a) provide the impetus to move the provision of peer support up the agenda with renal clinicians and commissioners, and improve the delivery of personalised care and the patient experience of care, and b) help guide the optimal development of peer support programmes and the efficient allocation of peer resources as we see the development of regional networks for this popular quality improvement initiative. S1 File PS_questionnaire v.5 11112024_T1.
Pre-treatment questionnaire, Time 1. (DOCX) S2 File PS_questionnaire v.5 11112024_T2. Post-treatment questionnaire, Time 2. (DOCX) S3 File Peer support_interview schedule Time 1 v.2 27.02.2024. Pre-treatment interview schedule, Time 1. (DOCX) S4 File Peer support_interview schedule Time 2 v.2 27.02.2024. Post-treatment interview schedule, Time 2. (DOCX) |
Editorial: Insights in developmental endocrinology: 2023 | 9a2f681f-8e12-4dd3-90af-e8f557a42592 | 11324551 | Internal Medicine[mh] | The wisdom of the body perspective transcends our current human understanding and is a call for more innovative biomedical research. Developmental endocrinology is integrative biology, involving the concept of homeostasis and the elegant underpinnings of life itself. Developmental endocrinology involves the intricate relationship between maternal nutrition and offspring health, and this has been the subject of extensive research and scientific inquiry. Miles et al., in a mouse model, investigate the effects of maternal caloric restriction in mid-gestation and lactation on neonatal development and adult metabolic function in response to a high-fat diet. Studies investigating the impact of maternal caloric restriction during specific stages of gestation and lactation shed light on the long-term implications for offspring health and adult metabolic function. Exploring gene expression and developmental endocrinology in response to maternal undernutrition stresses the importance of the interplay between maternal health and offspring health outcomes. These findings underscore the critical importance of early developmental stages in shaping adult physiological responses. Zhang et al. review the developmental endocrinology of oxidative stress at the maternal-fetal interface. They suggest oxidative stress at this site is an important driver of pathology, antioxidant therapy may be the best treatment for “placental diseases”, and an antioxidant lifestyle may help prevent disease. The report thoroughly examines the physiological implications of oxidative stress on the maternal-fetal interface, highlighting the potential ramifications on nutrient transfer, immune regulation, and overall developmental processes.
Moreover, it emphasizes the need for continued research endeavors and intervention strategies to mitigate the adverse effects of oxidative stress on this complex interplay, aiming to promote an integrated approach to establishing and maintaining the health of both the expectant mother and the developing fetus. Thyroid autoimmunity is associated with many maternal and neonatal adverse outcomes. In another context of developmental endocrinology in pregnancy, Liu et al. investigate thyroid peroxidase antibodies (TPO-Ab) and their association with the first-trimester miscarriage rate/live birth rate in women with unexplained recurrent spontaneous abortion (URSA), which has significant implications for understanding pregnancy outcomes. The findings highlight a higher first-trimester miscarriage rate in TPO-Ab-positive women, particularly in younger subgroups and primary URSA subgroups. While the live birth rate did not exhibit a statistically significant difference between TPO-Ab positive and negative groups, the potential impact of TPO-Ab on pregnancy outcomes, especially in the first trimester, merits further investigation. Acknowledging the study’s limitations, such as its retrospective design, emphasizes the need for larger, prospective randomized studies to confirm the association between TPO-Ab and the first-trimester miscarriage rate, particularly in specific subgroups of patients with URSA. Regarding the role of RORα in developmental endocrinology, Rani reviews the fascinating “staggerer mice” story, with one of its first roles materializing during embryogenesis, an intricate molecular-endocrine-mediated circadian-like regulatory process. Dysfunctional RORα impairs metabolism, osteogenesis, skeletal and smooth muscles, and immunity, making RORα a multi-functional protein during embryogenesis. The text discusses the importance of good nutrition for effective embryonic development and the role of essential nutrients in supporting healthy transcriptional systems.
RORα also functions in germ cell organization, another aspect of developmental endocrinology. Adrenal development in embryonic and fetal health expands our understanding of the intricate molecular and physiological processes that shape developmental endocrinology. Akkuratova et al. outline a detailed single-cell atlas of chromaffin development, permitting the identification of novel cell populations and establishing nuanced transitions within subpopulations of immature chromaffin cells. The work advances the field of sympatho-adrenal developmental endocrinology. The authors report the discovery of microheterogeneity in developing chromaffin cell populations, the identification of novel markers of adrenergic and noradrenergic populations in developing adrenal glands, and the revelation of new differentiation paths leading to these populations. Additionally, the research emphasized the essential roles of chromaffin cells in fetal survival, the initiation of breathing, and the physiological response to hypoxia. The study’s use of deep single-cell RNA sequencing and trajectory analysis provided valuable insights into the molecular events driving fate choices in Schwann cell precursors and the transient nature of developing chromaffin populations, leading to the identification of previously unknown transient or persisting markers of chromaffin cell subpopulations. Hypogonadotropic hypogonadism leads to absent, partial, or arrested puberty. Zhang et al. provide a comprehensive characterization of Kallmann syndrome and the genetic variations associated with the condition. The group provides crucial insights into the genetic and molecular mechanisms underlying this complex disorder, paving the way for precise clinical diagnosis and treatment strategies. The comprehensive study characterized the clinical phenotype and genetic variations in a 14.4-year-old male diagnosed with Kallmann syndrome (KS).
Bioinformatics analysis suggested that the IL17RD variant may disrupt fibroblast growth factor signaling by potentially affecting protein phosphorylation and modification. In contrast, the CPEB4 variant appears crucial in affecting olfactory bulb morphogenesis, potentially contributing to the patient’s hyposmia. The study provides valuable insights into the genetic and molecular mechanisms underlying KS. Furthermore, the study broadens the gene expression profile of KS-related pathogenic genes, paving the way for future research in understanding KS pathogenesis. The patient received gonadorelin pump pulse therapy, improving LH, FSH, and T levels. The patient is under ongoing regular follow-up, with follow-up examinations showing noteworthy progress. This study presents significant contributions to the academic understanding of this complex genetic disorder and paves the way for further research in Kallmann syndrome. Single-cell RNA sequencing is an emerging powerful tool to characterize cell subpopulations, circumventing the shortcomings of traditional cell population sequencing. Tirumalasetty et al. provide a comprehensive review of single-cell RNA sequencing compared to bulk RNA-seq of rodent and human patients’ testicular tissues. The team highlights “the cellular heterogeneity, spatial transcriptomics, dynamic gene expression, and cell-to-cell interactions with distinct cell populations within the testes”. The findings have potential implications for the future clinical management of male reproductive complications. Liu et al. report a bidirectional cohort study on spermatogenesis and seminal testosterone and offer a potential method to improve the assessment of male infertility and sperm quality. The team concludes that measuring testosterone in seminal fluid is more sensitive for judging the presence of local spermatogenesis in nonobstructive azoospermia patients.
These studies collectively contribute to the ongoing dialogue surrounding developmental endocrinology, maternal-fetal health, genetic disorders, and reproductive health. They underscore the importance of continued research efforts to unravel the complex interplay between endocrinological processes, environmental factors, and genetic determinants in shaping developmental outcomes. As we navigate the intricate landscape of developmental endocrinology, this wisdom of the body perspective is vital in guiding future research endeavors and clinical interventions to promote maternal and offspring health and well-being. LN: Conceptualization, Investigation, Project administration, Supervision, Visualization, Writing – original draft, Writing – review & editing. MC: Writing – review & editing. CR: Writing – review & editing. HK: Writing – review & editing. |
Photodocumentation in oculoplastic surgery: an up-to-date overview | 39c1e011-202b-4189-a22d-fed8f7d05402 | 11826790 | Ophthalmology[mh] | The equipment used was as follows: a digital single lens reflex (DSLR) camera model EOS Rebel T6 (Canon, Inc., Tokyo, Japan) with an Advanced Photo System Type C (APS-C) sensor measuring 22.3 mm x 14.9 mm, an EF 100 mm f/2.8 Macro USM lens (Canon, Inc., Tokyo, Japan), a dedicated Canon Speedlite 430EX III-RT (Canon, Inc., Tokyo, Japan) with a white diffuser reflector, and a tripod. The images were taken in the medical office of one of the authors (Barbi, JSF), whose ceiling is painted white and whose walls are light gray. For full face and periorbital region composition, the camera was used in manual (M) mode, with f/8 aperture, 1/125 s shutter speed, ISO 100, and the dedicated flash used in the through-the-lens (TTL) automatic mode with a 75-degree tilt and white diffuser reflector. The white balance (WB) was set to “flash” mode, and the focus was automatically centered on the patient’s eyes. In two particular situations, the settings were modified: in cases of eyelid ptosis, images were taken using the flash from the front, directed toward the patient’s face, with the diaphragm closed by 1 stop (f/11); and in cases of macro photographs, the flash was directed backward toward a silver diffuser reflector. The patient was seated on a swivel stool without wheels. A floor mat with markings was used to indicate the front (anteroposterior), oblique (45°), and profile (90°) positions. The camera was positioned on a tripod, and the height was adjusted so that the camera lens was aligned with the patient’s eyes. The external flash was directed at the ceiling, with a white diffuser reflector attached to the body of the flash. For full-face framing in the primary position gaze (PPG), the camera’s thirds grid was used so that the upper horizontal line passed through the pupils and the apex of the patient’s ears.
For framing the face in the oblique position, the patient was requested to rotate the entire body to 45° until the tip of the nose aligned with the malar eminence and the ear apex aligned with the lateral eyelid canthus. For lateral view framing, it is important to align the ear apex to the lateral eyelid canthus so that only the same side of Cupid’s bow can be seen. With an APS-C sensor camera and a 100-mm macro lens, the photographer was positioned 3 m away from the patient. For periorbital framing, the working distance was 1 m. To ensure and standardize this distance, the patient and photographer were placed on marks on the floor. For macro photographs (e.g., eyelid tumors), the camera was set to manual focus, using a 0.38-m focus distance and a 1:1 magnification ratio. In this setting, the photographer had to approach or move away from the lesion to be photographed to obtain a sharp focus. At the start of each set of photographs, the patient was asked to hold an 18% gray card next to his/her face, and a picture was taken for WB correction, followed by the full set of photographs. Photographs were taken in RAW format and processed in postproduction using Adobe Lightroom®. All patients signed a consent form for the use of their photographs. The included articles showed some common aspects: 10 of 19 articles suggested the use of a telelens (60-110 mm), and only 3 of them described the working distance. Background might vary between blue, white, black, and gray tones; however, in more than half of the included articles, the blue background was cited as preferential or as an alternative (1-3, 6-13). Most of the articles (1-3, 6, 7, 9, 10, 13-15) suggested illumination with studio lights, four suggested speedlight only (ring flash, dedicated TTL flashes, or unspecified speedlight), and two did not mention the illumination source. Only four articles mentioned the three parameters used: ISO, aperture, and shutter speed.
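As a rough cross-check on the working distances above, the thin-lens model predicts the subject area that the 22.3 mm x 14.9 mm sensor frames at a given distance. This is only a sketch: real macro lenses deviate from the thin-lens model, especially at close range.

```python
def field_of_view(focal_mm, subject_dist_mm, sensor=(22.3, 14.9)):
    """Thin-lens estimate of the framed subject area (width, height) in mm.
    Magnification m = f / (d - f); framed field = sensor dimension / m."""
    m = focal_mm / (subject_dist_mm - focal_mm)
    return tuple(round(s / m) for s in sensor)

print(field_of_view(100, 3000))  # ~(647, 432) mm: roughly full-face/head framing at 3 m
print(field_of_view(100, 1000))  # ~(201, 134) mm: periorbital framing at 1 m
# At the 1:1 macro setting, m = 1, so the framed field equals the sensor itself,
# 22.3 mm x 14.9 mm, which suits small eyelid lesions.
```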
More than half of the included articles did not mention these parameters. Most of the included articles cited the Frankfurt horizontal plane for the head position and positioning angles such as frontal, lateral, and oblique views. The results of the protocol adopted in this study for photographic documentation in oculoplasty are shown, and illustrative photographs demonstrate the outcome of using this protocol. We considered special angles of view and some particularities in oculoplastic photography. In pre- and post-blepharoplasty or eyelid surgery photographs, we took photographs in the PPG, right and left oblique, and lateral views. In cases of ptosis, we photographed the patient in PPG, supraversion, and infraversion. We also used the frontal flash to demonstrate the margin reflex distance, as this reference is the main comparison parameter for the position of the upper eyelid in the postoperative period. We recorded the results of the 10% phenylephrine test, when performed, by placing a mark (white eye pencil or a piece of white micropore) above the ipsilateral eyebrow where the drop was instilled. This way, when reviewing the photographs, it was clear which was the before photograph and which the after-the-test photograph. In orbital photography, we recorded all gazes for the assessment and documentation of motility/restrictions of the extraocular muscles. In addition, to demonstrate the anterior projection of the eyeball and proptosis, we took photographs with the camera tilted up and located below the patient’s chin, and we asked the patient to raise the chin.
By using the protocol described in this article, we collected data and photographs from 324 patients and obtained consistent results with optimal standardization of facial photographs before and after the surgical/cosmetic procedures. There are three types of photographers: amateurs, professionals, and functional . Physicians who practice oculoplastic photography fall into the category of “functional photographers,” i.e., those who are not professional photographers but need to have the minimum basic knowledge of photographic recording for their medical practice. Photodocumentation is valuable for various purposes, such as medical record keeping, insurance and legal situations, creation of models for preoperative planning and clarifying it to the patient, assessment for self-improvement, and medical education, including teaching residents, sharing data with colleagues, and preparing presentations and publications . Notably, even though the 19 articles included proposed standardization of face photographs, only two reported information of all the characteristics analyzed. Guided by the literature review and a self-developed standardization protocol used by one of the authors (Barbi, JSF), we proposed a protocol for oculoplastic photodocumentation, which focuses on basic technical knowledge on photography, camera parameters, illumination, background, head position, and image size. DSLR cameras were recommended in several articles for their excellent cost-benefit ratio. They can offer the benefits of interchangeable lenses; thus, one can select the appropriate focal length, have complete control over camera settings, and obtain good image quality. Full-frame (24 mm x 35 mm) sensors are larger than APS-C sensors and can produce better image quality and less digital noise. However, full-frame cameras are generally more expensive, and APS-C sensor cameras are good enough for clinical photography. The focal length is quoted in millimeters. 
With an APS-C sensor camera, lenses are regarded as "wide angle" when the focal length is <35 mm. This type of lens delivers a wider field of view and is ideal for panoramic and landscape photography. The so-called "normal" lenses provide a viewing angle similar to that of the human eye, which corresponds to 35-mm lenses on cameras with an APS-C sensor. Telephoto lenses provide a smaller viewing angle and magnification of the image and are ideal for face and close-up photographs. Such lenses avoid the facial distortions that can occur with the close approach of normal or wide-angle lenses. Lenses can have a fixed focal length (prime lenses) or an adjustable focal length (zoom lenses). Prime lenses have a single focal length, so to change the frame, the photographer must move back and forth. Zoom lenses offer several focal lengths that can be adjusted manually, so there is no need for the photographer to move just to change the frame. In medical practice, prime lenses are recommended for two reasons: they allow better image quality and easier standardization. The authors of this article recommend a 100-mm macro lens, according to personal experience and published articles. The authors recommend this type of lens because it allows a greater distance between the photographer and the patient, which is favorable for lighting: the bounced light has a longer path to reach the patient and delivers a softer and broader illumination. A dedicated macro lens is preferred, as it allows closer focusing at a 1:1 magnification ratio. This feature is advantageous for photographing small lesions at large magnification. Cameras interact with light through basically three components of the camera settings, the so-called photographic exposure triangle: diaphragm or aperture (f-number), shutter speed, and ISO. In an easy-to-understand and didactic way, we can compare these parameters with the physiology of the eye.
The diaphragm of a camera can be compared to the pupil: the larger the aperture, the more light enters and the shallower the depth of field, and vice versa. The shutter speed can be compared with the act of "blinking." It works as a window that allows light to enter the eye depending on how long it is open or how fast it closes. The ISO is a measure of the sensitivity of the sensor to light, equivalent to the retina, and can be adjusted according to environmental brightness. In a very bright environment, the ISO should be reduced, and vice versa. These three parameters can be adjusted automatically, semi-automatically, or manually on DSLR cameras. For correct standardization, the photographer should always choose the manual mode and fix these values for consistency in the photographs before and after surgical or cosmetic treatments. A camera's aperture is quantified by the "f" number, which is the ratio of the lens focal length to the diameter of the aperture. The aperture of the diaphragm in a 100-mm f/2.8 macro lens ranges from f/2.8 to f/22; the smaller the value, the more open the diaphragm and the shallower the depth of field. In medical practice, it is recommended to work with apertures between f/5.6 and f/8 for two reasons: (1) to have sufficient depth of field to render the patient's whole face sharp (very wide apertures can leave the eyes in focus but the tip of the nose and the ears blurred), and (2) on a 100-mm macro lens, these "f" values allow for better image quality (at very open or very closed apertures, there is a slight loss of image quality). The shutter speed is a measure of how long the camera's shutter blades are open to expose the sensor to light, and it is measured in seconds or fractions of a second. On the camera used in the present study, the shutter speed ranges from 30 s (slowest) to 1/4000 second (fastest).
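The three settings trade off against each other in fixed "stops." A short sketch of that arithmetic, using the standard ISO-100-referenced exposure value and the aperture range recommended above:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100) -> float:
    """Exposure value referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).

    One EV step = one stop = a doubling or halving of the light recorded.
    """
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# The two apertures recommended in the text, at the 1/125 flash sync speed:
ev_56 = exposure_value(5.6, 1 / 125, 100)
ev_80 = exposure_value(8.0, 1 / 125, 100)
print(f"f/5.6 @ 1/125, ISO 100 -> EV {ev_56:.2f}")
print(f"f/8.0 @ 1/125, ISO 100 -> EV {ev_80:.2f}  (about one stop less light)")
```

Stopping down from f/5.6 to f/8 costs roughly one stop, which could be recovered by doubling the ISO or the flash power; this is why fixing all three values in manual mode is what makes before/after photographs comparable.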
When the flash is attached to the camera's hot shoe, this setting is limited to a maximum speed of 1/125 s (corresponding to the maximum synchronization speed of the flash). At very high speeds, the second shutter curtain may close before the light hits the entire sensor, which generates a dark band in the photographs. To prevent this, the camera limits fast shutter speeds when the flash is attached. Since the ambient light in an ophthalmology outpatient clinic is not consistent in brightness and color temperature, the maximum flash sync speed should be used to exclude ambient-light interference from the photographs. Thus, regardless of the time of day, sunlight through windows, and ceiling lights (on or off), the photograph will have light consistency because the camera will record the light coming only from the flash. For the same reason, the photographer should work with lower ISO values, and 100 is suggested. The ISO represents the sensitivity of the sensor to light, and lower values are desired so that there is minimal or no capture of ambient light. Although most articles encourage the use of studio lights, the authors suggest using a single light source (a TTL speedlight): (1) to simplify the photography equipment, as studio lights may not be viable in terms of space in ophthalmology offices, and (2) because a speedlight works very effectively without requiring additional lights for full-face and periorbital region photographs. In addition to the photographic triangle, the power level of the flash is the fourth element in flashed photographs. This parameter is set on the flash itself and can be placed in manual or automatic mode. This study agrees with Ong et al., who suggested the use of the flash in automatic mode; that is, the camera will determine the power of the flash using the TTL metering function.
In a TTL system, the camera fires a quick pre-flash to determine the amount of light needed to illuminate the subject, followed by the correct flash output to achieve correct exposure. If photographs are always taken at the same working distance using the same lens, they will have good consistency in this parameter even when working with the flash in automatic mode. However, as most oculoplastic surgeons do not use a studio, factors such as furniture or patient clothing color may cause small changes in the automatic reading. To avoid handling the flash in manual mode, the authors suggest using an 18% gray card in the first photograph of a series. The card helps with the WB in the editing software in postproduction, as it serves as a neutral color reference for adjustment of the color temperature of the photographs. Another important suggestion is to direct the flash toward the ceiling, thus using the bounced light to achieve smoother and broader lighting and a greater sense of three-dimensionality. A frontal flash results in harsh light, which leaves the photograph "flat," that is, without the portrayal of relief and contours that is fundamental for medical photodocumentation. In addition, a white diffuser reflector attached to the flash is also recommended to direct part of the light in a straight line; this light reaches the patient's face much more smoothly, as it is bounced rather than direct light. This reflector is used to eliminate shadows that can occur in the periorbital region when the lighting comes exclusively from "top to bottom," as when using only light reflected from the ceiling. This is useful in men who have a very prominent forehead, in whom these "shadows" in the periocular region can be eliminated or smoothened.
This problem can also be solved by using a small rectangular reflecting panel positioned horizontally against the patient's chest, just under the collarbone and outside the framing, to reflect the light from bottom to top and minimize these shadows. For macrophotographs, the current literature suggests the use of a ring or twin flash designed specifically for this type of photograph. However, as this study aimed to simplify the photographic apparatus without loss of quality and consistency of the images, the authors used the external flash aimed at a white reflector (which can be something as simple as a white card) or a 30-cm silver reflector located behind the photographer. We chose this method because the photographer–patient distance for macrophotography is approximately 0.38 m, which is very close. The use of a front flash would flatten the lesions, removing important characteristics such as shadows and raised edges. Bouncing the light off the ceiling is not a good option either, as in macrophotography the framed area is very small and needs more illumination than the light reflected from the ceiling can provide, leaving the image dark. Directing the flash backward into a reflector allows the light to illuminate the lesions smoothly, without erasing the contour and relief that should be demonstrated in the preoperative record or clinical follow-up. WB is the process of removing an unnatural color cast from an image. Different light sources can have different color temperatures: fluorescent lamps can result in greenish images, just as incandescent sources can result in orange skin tones. In medical photography using the proposed protocol, since a flash will always be used, it is recommended to leave the WB in "flash" mode. As previously said, if the flash misreads the light in the scene, the WB can be corrected in the editing software by using an 18% gray card.
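The gray-card correction itself is simple arithmetic: the card should read as a neutral patch (R = G = B), so each channel is scaled until it matches. A minimal sketch with hypothetical patch readings, assuming the common raw-processing convention of anchoring the green channel at gain 1.0; editing software applies the same idea internally:

```python
def gray_card_gains(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Per-channel gains that neutralize an 18% gray-card patch.

    A neutral patch should read R == G == B; green is kept at gain 1.0
    and the other channels are scaled to match it.
    """
    return (g / r, 1.0, g / b)

# Hypothetical reading of the card under a warm (orange-tinted) cast:
patch = (132.0, 118.0, 96.0)
gains = gray_card_gains(*patch)
corrected = tuple(round(c * k, 1) for c, k in zip(patch, gains))
print("gains:", tuple(round(k, 3) for k in gains))
print("corrected patch:", corrected)  # all three channels now equal
```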
The need for a uniform background is well established, but the color suggested for this background varies between authors. Several articles on medical photography have suggested the use of a blue background because it contrasts with the yellowish shade of the skin of most patients, generating a composition pleasant to the human eye. Some authors suggest a black background to eliminate shadows that could be generated by a white background, depending on the incident light. However, a black background may present challenges in patients with dark hair and cannot provide subject–background separation unless another light source is used, making office photography more complex. The simplest and cheapest way is to paint the wall with a color close to 18% gray, based on established photography books. This tone is intermediate between white and absolute black, and it helps with the photometer readings built into DSLR cameras. In addition, due to its "absence of color," it prevents unwanted reflections of color onto the patient's face by the flash. The suggestion was to paint the office ceiling white, as it will be used to bounce the light from the dedicated flash. Moreover, matte paint should be used to avoid over-reflection of the light. The recommended positions for full-face photographs are anteroposterior, right anterior oblique, left anterior oblique, right profile, and left profile (5,6,8,9,11-16,18,26,28,30). Discreet marks on the floor are also essential to determine the distance between the photographer and the patient. The use of the thirds grid on the camera's screen is highly recommended to help the photographer align the camera precisely. The authors agree with the statements of Rhee regarding a frame reference, i.e., a horizontal line that passes through the center of the pupil or the eyelid canthus and the apex of the ear, because this reference can be used in frontal, oblique, and profile pictures.
More than half of the included articles used the Frankfurt horizontal plane (a line that passes through the external auditory canal and the inferior orbital rim) as a reference for head positioning. Nevertheless, it is a radiological reference, and it can be difficult to reproduce in photography. To frame the face in the oblique position, the patient rotated his/her entire body to face markings on the floor corresponding to 45° on the right and then on the left. These markings correspond to the patient rotating his/her body until the tip of the nose aligns with the malar eminence and the ear apex with the lateral eyelid canthus. For lateral views, the ear apex should be aligned with the lateral eyelid canthus, and only the ipsilateral Cupid's bow should be seen. Although photographs of the face are usually recommended in a vertical orientation, the authors recommend the horizontal orientation for ease of operation with the camera and flash. In orbital diseases, apparent exophthalmos and enophthalmos must be documented. Thus, the authors recommend the "worm's-eye view," which is taken with the patient's neck in extension and the camera viewing from below; the tip of the nose aligns with the glabella, right in the middle of the eyebrows, the focus is kept on the eyes, and both eyes appear in the frame. Regarding the format of the photographs, the suggestion was to shoot in RAW format, a type of unprocessed file in which color information, WB, and other parameters can be edited later in postproduction. The authors used Adobe Lightroom® for editing, converting RAW to JPEG files (without loss of quality, since the files will not be compressed), and storing them in the cloud, which allows for device synchronization and therefore easy sharing or exchange of files. A narrative review on photodocumentation in facial surgery was performed, and a standardized protocol for facial photographic registration, self-developed and guided by the literature review, was described.
Pre- and post-procedure photographs of surgical and cosmetic treatments were collected from 324 patients using this protocol, and consistent results were obtained with optimal standardization of the facial photographs. This protocol can be easily adapted to any oculoplastic surgeon's practice, without the need to set up a studio in the office or spend on unnecessary extra photographic equipment. Essentially, there is no gold-standard protocol in facial photodocumentation. However, there is agreement on the importance of standardization and of reducing variables as much as possible to achieve consistency across photographs and register the patient's condition and clinical evolution as accurately as possible.
How Will Nanomedicine Revolutionize Future Dentistry and Periodontal Therapy?
The successful treatment of periodontal disease has also led to a significant reduction in glycated hemoglobin. Current therapeutic approaches to periodontitis range from behavioral changes to surgical interventions. The first objective is the removal of subgingival plaque and the control of local and systemic inflammation, followed by tissue regeneration. However, these goals are not easily achieved in most adult patients with aggressive periodontitis. Indeed, the American Academy of Periodontology (AAP) and the European Federation of Periodontology (EFP) emphasize the importance of a multidisciplinary approach that considers both oral and systemic pathology, reflecting a more integrated vision of patient care. New therapeutic opportunities may be provided by the development of new materials that are functional and adaptable to the individual patient's pathological condition, avoiding the limitations of traditional treatments such as mechanical instrumentation, surgery, and systemic or local antibiotics. The new frontier of these treatments lies in nanomedicine, nanotechnology, and nanosystems, which offer more targeted, efficient, and personalized therapeutic approaches. While nanomedicine focuses on applying nanotechnology to medical treatments, nanotechnology itself encompasses a broader field of science and engineering aimed at manipulating matter at the nanoscale. Nanosystems, on the other hand, refer to integrated, functional structures made from nanoscale components designed to perform specific tasks, often combining the principles of both nanomedicine and nanotechnology for advanced medical applications.
The application of nanomedicine and nanotechnologies has been suggested as part of the therapeutic arsenal for the treatment of periodontal diseases, mainly periodontitis, with the goal of delivering a sufficient concentration of active molecules to the targeted site while avoiding their distribution in non-specific tissues, consequently decreasing the risk of side effects. However, further research, regulatory approval, and safety evaluations are needed before these technologies can be widely implemented in clinical practice. Although several studies have investigated the characteristics and effects of nanoparticles (NPs) for treating periodontal diseases, the majority of investigations have been performed on in vitro models. The few in vivo experiments have shown that nanomedicine holds great promise for the treatment of periodontal diseases, with positive results in drug delivery, bacterial infection control, and tissue regeneration. While these studies have demonstrated efficacy in animal models, further research is required to fully understand the safety, long-term effects, and clinical applicability of nanomedicine in human periodontal disease treatment. The next steps will likely involve clinical trials to confirm the results seen in animal studies and to determine the most effective nanomedicine formulations for periodontal care. This study summarizes nanotechnology-based strategies for diagnosing and treating periodontitis in terms of antibacterial therapy, anti-inflammatory therapy, and tissue regeneration. To effectively treat periodontitis, clinicians must understand not only the clinical signs but also the status, severity, and activity level of the disease. Accurate and prompt diagnosis of periodontitis enables clinicians to identify whether the disease is active, stable, or progressing, which directly influences the treatment strategy.
Clinical evaluations such as the full-mouth bleeding score, full-mouth plaque score, recessions, tooth movement and migration, probing depth, clinical attachment level, and bleeding on probing (BoP) provide a comprehensive understanding of the patient's periodontal health. However, these methods alone cannot identify the presence of the specific pathogens causing the infection, which is also crucial for developing targeted treatments. Nanotechnology also offers advances in the early diagnosis and monitoring of periodontal diseases through biosensors and imaging techniques. Nanomaterials can be functionalized with specific molecules (such as antibodies or peptides) that bind to disease markers or pathogens. This allows for highly targeted imaging, which can dramatically improve diagnostic accuracy, especially for detecting specific bacterial infections. Peptide-functionalized nanoparticles could improve the specificity of diagnostic imaging agents, making it possible to detect pathogens at very low concentrations in the oral cavity that would otherwise be missed by traditional imaging techniques. Targeted nanoparticles could be used in fluorescence or optical coherence tomography (OCT) to selectively highlight regions of interest, such as areas of early decay, gum disease, or tumors.
2.1. Nanotechnology-Enhanced Imaging
The continuous evolution of imaging technologies in dentistry has dramatically improved diagnostic precision, reduced treatment times, and enhanced patient care. From 3D imaging systems like cone-beam computed tomography (CBCT) and intra-oral scanners to innovative artificial intelligence (AI)-driven analysis tools, these advances are reshaping how dental professionals diagnose and treat a variety of oral conditions. As these technologies become more accessible and affordable, they are likely to become integral to routine dental practice, driving further improvements in both patient outcomes and operational efficiency.
In imaging technologies, nanomaterials are increasingly employed to enhance the resolution, sensitivity, and overall effectiveness of diagnostic tools. The innovative use of nanotechnology aims to reduce radiation exposure while producing sharp, detailed pictures, improving the safety and accuracy of dental imaging. NPs work at the molecular or nanoscale level, providing unique properties that are difficult to achieve with conventional materials. Nanoparticles based on high atomic number (high-Z) elements such as gadolinium (Gd), ytterbium (Yb), hafnium (Hf), tantalum (Ta), tungsten (W), rhenium (Re), gold (Au), and bismuth (Bi) are being explored as potential contrast agents to significantly enhance the visibility of tissues on X-rays and computed tomography (CT) scans. High-Z elements, due to their high atomic number, absorb X-rays more effectively than elements commonly found in the body such as carbon, oxygen, nitrogen, and calcium. This property leads to increased X-ray attenuation, providing greater contrast between the high-Z elements and the surrounding tissues. Compared to iodinated CT contrast agents, AuNPs have a significantly higher X-ray attenuation. Iodinated agents are excreted rapidly from the body, resulting in a short imaging time. In contrast, AuNPs have a K-edge energy of 80.7 keV, which allows them to outperform iodinated contrast agents in terms of image quality at the same concentration. AuNPs are also less toxic to the kidneys than iodinated contrast agents and thus could offer a safer option for patients, particularly those with renal impairment. Ostadhossein F. et al. described a novel approach combining polymeric silane and hafnium oxide (HfO2) nanoparticles (Hf PS NPs) for both diagnostic imaging and therapeutic applications, specifically targeting oral pathogens.
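The contrast mechanism described here, stronger X-ray absorption by high-Z elements, follows the Beer–Lambert law. A short Python sketch; note that the attenuation coefficients below are illustrative placeholders, not tabulated values for any real material or beam energy:

```python
import math

def transmitted_fraction(mass_atten_cm2_g: float, density_g_cm3: float,
                         thickness_cm: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mass_atten_cm2_g * density_g_cm3 * thickness_cm)

# Placeholder coefficients chosen only to show the effect qualitatively:
soft_tissue = transmitted_fraction(0.18, 1.0, 1.0)  # low-Z matrix
with_agent = transmitted_fraction(1.50, 1.2, 1.0)   # same path loaded with a high-Z agent
print(f"soft tissue transmits {soft_tissue:.2f} of the beam")
print(f"tissue loaded with a high-Z agent transmits {with_agent:.3f}")
# The larger the gap between these two fractions, the stronger the radiographic contrast.
```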
Experiments conducted ex vivo on human tooth samples demonstrated a significant difference in X-ray absorption between the nanoparticles and the tooth material. A high-affinity, pathogen-selective peptide was used to guide the nanoparticles specifically to the pathogen, allowing for molecularly targeted X-ray imaging. This could provide a more precise method of identifying and localizing bacterial pathogens in dental applications, especially in the context of caries. Nanomaterials like quantum dots or carbon-based nanomaterials can be designed to fluoresce at specific wavelengths when exposed to light. These materials can be incorporated into fluorescence-based imaging technologies to enable the detection of early-stage dental issues that may not be visible through conventional methods. The unique characteristics of carbon dots (CDs), including their fluorescence properties, low toxicity, and the ability to modify their surface chemistry for selective interactions with bacteria, make them promising candidates for improving the diagnostic and therapeutic management of bacterial infections. Yang J. et al. constructed quaternized CDs that exhibited bacterial-contact-enhanced fluorescence emission. This modification enhanced the selectivity and sensitivity of the CDs for detecting Gram-positive bacteria. The quaternized CDs can differentiate between Gram-positive and Gram-negative bacteria using fluorescence signals, facilitating faster diagnosis of bacterial infections. Liu S. et al. focused on CDs doped with nitrogen (N) and chlorine (Cl). These positively charged N, Cl-codoped CDs were found to exhibit high selectivity for Gram-positive bacteria through selective fluorescence imaging, as well as antibacterial effects.
The positively charged surface of the N, Cl-codoped CDs likely facilitates strong electrostatic interactions with the negatively charged membranes of Gram-positive bacteria, contributing to both their selective recognition and their bactericidal action.
2.2. Nano-Biosensors
The combination of nanomaterials and biosensor technologies offers a powerful and flexible approach for the detection and quantification of a wide range of biological and chemical substances, enabling more efficient diagnostic tools for clinical and environmental applications. Metal nanoparticles (MNPs), carbon nanotubes, and quantum dots (QDs) are employed to enhance the sensitivity and precision of electrochemical biosensors. These materials are integral to the performance of nanosensors, which have gained increasing attention over the last few decades, surpassing other analytical techniques like chromatography and spectrophotometry. Electrochemical biosensors are especially favored for point-of-care (POC) diagnostics due to their sensitivity, rapid response time, and practicality, making them well-suited for portable, on-site applications that detect a wide range of analytes, including pharmaceuticals, proteins, biomarkers, and pathogens. MNPs conjugated with antibodies are a powerful tool in immunomagnetic separation (IMS) for biosensor applications. This technique combines the specificity of antibodies for target antigens with the magnetic properties of metal nanoparticles to efficiently capture and separate biomolecules or cells of interest from complex biological samples. Ma D. et al. developed a novel, easy-to-use, low-cost detection platform for monitoring dental health, specifically targeting the detection of tooth lesions caused by dental caries and periodontal diseases.
The platform is a wearable mouthguard made of a composite material consisting of gold–silver nanorods (Au@Ag NRs) and poly(dimethylsiloxane) (PDMS), which can visualize the presence of dental lesions through a color change at the affected sites. The color change occurs in response to the presence of hydrogen sulfide (H2S) gas, which is produced by bacterial decay at the lesion sites in the mouth. In addition to its sensing capabilities, the Au@Ag NRs–PDMS mouthguard exhibits several desirable characteristics: mechanical properties that maintain its integrity during wear, resistance to degradation from chemical exposure (making it suitable for the harsh oral environment), and high biocompatibility. Mannoor M.S. et al. created a highly sensitive, selective, and non-invasive sensing system based on graphene nanosensors able to interface with biomaterials such as tooth enamel. The system can detect single-cell bacterial infections and provide wireless remote monitoring, making it a promising tool for healthcare applications. In this system, the graphene network is integrated onto the biomaterial interface, allowing it to conform closely to biological surfaces. This intimate contact ensures that the sensor can pick up even very subtle changes in the environment, such as the presence of bacteria. The graphene nanosensors are transferred to the biological surfaces using a water-soluble silk fibroin platform, which provides a biocompatible and flexible base. Silk fibroin serves as a medium that enables the graphene to be transferred seamlessly, preserving its sensitive properties while ensuring that the system remains biologically safe and able to interact with living tissues. The sensor integrates antimicrobial peptides (AMPs), which enable broadly selective biorecognition. These peptides are tailored to specifically target bacterial cells, enhancing the sensor's specificity for detecting single-cell bacteria.
The combination of graphene, AMPs, and the resonant circuit results in a highly sensitive, selective, and wireless sensor that can be used for a wide variety of applications, from detecting bacteria in oral cavities to monitoring environmental pollutants or tracking health conditions in real time.
This property leads to increased X-ray attenuation, providing greater contrast between the high-Z elements and the surrounding tissues . Compared to iodinated CT contrast agents, AuNPs have a significantly higher X-ray attenuation. Iodinated agents are excreted rapidly from the body, resulting in a short imaging time . In contrast, AuNPs have a K-edge energy of 80.7 keV, which allows AuNPs to outperform iodinated contrast agents in terms of image quality at the same concentration. AuNPs are less toxic to kidneys than iodinated contrast agents, and thus could potentially offer a safer option for patients, particularly those with renal impairments . Ostadhossein F. et al. described a novel approach combining polymeric silane and hafnium oxide (HfO 2 ) nanoparticles (Hf PS NPs) for both diagnostic imaging and therapeutic applications, specifically targeting oral pathogens. Experiments conducted on human tooth samples outside of a living organism demonstrated a significant difference in X-ray absorption between the nanoparticles and the tooth material. A high-affinity, pathogen-selective peptide was used to guide the nanoparticles specifically to the pathogen, allowing for molecularly targeted X-ray imaging. This could provide a more precise method of identifying and localizing the bacterial pathogen in dental applications, especially in the context of caries . Nanomaterials like quantum dots or carbon-based nanomaterials can be designed to fluoresce at specific wavelengths when exposed to light. These materials can be incorporated into fluorescence-based imaging technologies to enable the detection of early-stage dental issues that may not be visible through conventional methods. 
The unique characteristics of carbon dots (CDs), including their fluorescence properties, low toxicity, and the ability to modify their surface chemistry for selective interactions with bacteria, make them promising candidates for improving the diagnostic and therapeutic management of bacterial infections. Yang J. et al. constructed quaternized CDs that exhibited bacterial-contact-enhanced fluorescence emission. This modification enhanced the selectivity and sensitivity of CDs for detecting Gram-positive bacteria. The quaternized CDs can differentiate between Gram-positive and Gram-negative bacteria using fluorescence signals, facilitating faster diagnosis of bacterial infections . Liu S. et al. focused on CDs doped with nitrogen (N) and chlorine (Cl) elements. These positively charged N, Cl-codoped CDs were found to exhibit both high selectivity for Gram-positive bacteria in selective fluorescence imaging and antibacterial effects. The positively charged surface of the N, Cl-codoped CDs likely facilitates strong electrostatic interactions with the negatively charged membranes of Gram-positive bacteria, contributing to both their selective recognition and bactericidal action . The combination of nanomaterials and biosensor technologies offers a powerful and flexible approach for the detection and quantification of a wide range of biological and chemical substances, enabling more efficient diagnostic tools for clinical and environmental applications. Metal nanoparticles (MNPs), carbon nanotubes, and quantum dots (QDs) are employed for enhancing the sensitivity and precision of electrochemical biosensors. These materials are integral to the performance of nanosensors, which have gained increasing attention over the last few decades, surpassing other analytical techniques like chromatography and spectrophotometry .
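The Gram discrimination reported for the quaternized and N, Cl-codoped CDs rests on a measurable fluorescence change upon bacterial contact. As a caricature only, a readout of this kind can be reduced to a ratio threshold; the 2x enhancement cutoff and the signal values below are invented for illustration and are not taken from the cited studies.

```python
def classify_gram(signal_with_bacteria, baseline_signal, threshold=2.0):
    """Toy readout for contact-enhanced fluorescence: quaternized CDs
    brighten strongly on Gram-positive cell walls. The 2x enhancement
    threshold and the signal values are invented for illustration."""
    enhancement = signal_with_bacteria / baseline_signal
    return "Gram-positive" if enhancement >= threshold else "Gram-negative or none"

print(classify_gram(520.0, 100.0))  # strong enhancement
print(classify_gram(115.0, 100.0))  # weak change
```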
Electrochemical biosensors are especially favored for point-of-care (POC) diagnostics due to their sensitivity, rapid response time, and practicality, making them well-suited for portable, on-site applications to detect a wide range of analytes, including pharmaceuticals, proteins, biomarkers, and pathogens . MNPs conjugated with antibodies are a powerful tool in immunomagnetic separation (IMS) for biosensor applications. This technique combines the specificity of antibodies for target antigens with the magnetic properties of metal nanoparticles to efficiently capture and separate biomolecules or cells of interest from complex biological samples . Ma D. et al. developed a novel, easy-to-use, low-cost detection platform for monitoring dental health, specifically targeting the detection of tooth lesions caused by dental caries and periodontal diseases. The platform is a wearable mouthguard made of a composite material consisting of gold–silver nanorods (Au@Ag NRs) and poly(dimethylsiloxane) (PDMS), which can visualize the presence of dental lesions through a color change at the affected sites. The color change occurs in response to the presence of hydrogen sulfide (H 2 S) gas, which is produced by bacterial decay at the lesion sites in the mouth. In addition to its sensing capabilities, the Au@Ag NRs–PDMS mouthguard exhibits several desirable characteristics, such as mechanical properties that maintain its integrity during wear, resistance to degradation from chemical exposure (making it suitable for the harsh oral environment), and high biocompatibility . Mannoor M.S. et al. created a highly sensitive, selective, and non-invasive sensing system including graphene nanosensors able to interface with biomaterials such as tooth enamel. The system can detect single-cell bacterial infections and provide wireless remote monitoring, making it a promising tool for healthcare applications.
In this system, the graphene network is integrated onto the biomaterial interface, allowing it to conform closely to biological surfaces. This intimate contact ensures that the sensor can pick up even very subtle changes in the environment, such as the presence of bacteria. The graphene nanosensors are transferred to the biological surfaces using a water-soluble silk fibroin platform, which provides a biocompatible and flexible base. Silk fibroin serves as a medium that enables the graphene to be transferred seamlessly, preserving its sensitive properties while ensuring that the system remains biologically safe and able to interact with living tissues. The sensor integrates antimicrobial peptides (AMPs), which enable broadly selective biorecognition. These peptides are tailored to specifically target bacterial cells, enhancing the sensor’s specificity for detecting single-cell bacteria. The combination of graphene, AMPs, and the resonant circuit results in a highly sensitive, selective, and wireless sensor that can be used for a wide variety of applications, from detecting bacteria in oral cavities to monitoring environmental pollutants or tracking health conditions in real time . The first aim of treating periodontal disease is to eliminate subgingival debris and plaque to reduce the bacterial load. Although antibiotics represent a therapeutic strategy for treating the acute manifestation of periodontitis, they fail to support the long-term health of periodontal tissues, and they do not provide tissue regeneration. Moreover, the emergence of antibiotic resistance in bacteria is a significant global health challenge . Nanomaterials are emerging as a promising approach to address antibiotic resistance due to their ability to evade existing resistance mechanisms adopted by bacteria .
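The wireless readout described for the graphene sensor relies on a resonant circuit: analyte binding changes the effective capacitance of the tank, shifting the resonant frequency f = 1/(2*pi*sqrt(L*C)). A minimal sketch with assumed toy component values (not taken from the cited study):

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Ideal LC tank resonance: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Toy values (assumptions for illustration): a 1 uH readout coil against
# an effective capacitance that rises 5% when bacteria bind the
# AMP-functionalized graphene surface.
L_COIL = 1e-6          # henries
C_CLEAN = 10e-12       # farads, no bacteria bound
C_BOUND = 10.5e-12     # farads, after binding (assumed)

f_clean = resonant_frequency_hz(L_COIL, C_CLEAN)
f_bound = resonant_frequency_hz(L_COIL, C_BOUND)
print(f"clean: {f_clean / 1e6:.2f} MHz, bound: {f_bound / 1e6:.2f} MHz")
```

In this toy case the interrogating antenna sees the resonance move down by roughly 1 MHz; it is that remotely readable shift, rather than a wired measurement, that encodes detection.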
Resistance in bacteria can be intrinsic, which refers to the inherent ability of bacteria to resist the effects of antibiotics due to their natural characteristics or structural features . Gram-negative bacteria possess an outer membrane that limits the entry of certain antibiotics, making them inherently resistant to many drugs. Many bacteria have evolved efflux pumps that can actively transport a range of antimicrobial agents out of the cell, decreasing drug accumulation and effectiveness . Pseudomonas aeruginosa is intrinsically antibiotic-resistant due to its impermeable outer membrane and efficient efflux pumps . On the other hand, extrinsic resistance refers to the acquired ability of bacteria to resist antibiotics. The development of resistance is a complex process influenced by genetic, environmental, and selective factors . Spontaneous mutations in target genes can lead to changes that prevent antibiotic binding, such as alterations in ribosomal RNA or penicillin-binding proteins. Bacteria can acquire resistance genes from other bacteria through mechanisms like transformation, transduction, or conjugation, leading to the spread of resistance traits. Staphylococcus aureus can acquire methicillin resistance (MRSA) through horizontal gene transfer of the mecA gene . Enterobacteriaceae can develop resistance to carbapenems by acquiring carbapenemase genes . Some bacteria can produce enzymes that can inactivate antibiotics (e.g., beta-lactamases that break down penicillin) . Differences in structure lead to variations in resistance mechanisms between Gram-negative and Gram-positive bacteria. Gram-negative bacteria usually employ all mechanisms: reducing drug uptake, altering drug targets, inactivating drugs, and actively expelling drugs. Gram-positive bacteria less frequently employ reduced drug uptake due to the absence of an LPS outer membrane and lack specific efflux capabilities . 
In bacterial infections, biofilms play a critical role, providing a protective environment that shields bacteria from the action of antibiotics. The cells within biofilms are often in a dormant state, making them less susceptible to treatments . Nanoparticles are being explored as alternatives to traditional antibiotics due to their unique properties and mechanisms of action. Nanomaterials can be highly effective in controlling infection due to their small size, enhanced surface area, shape, and ability to deliver therapeutic agents directly to the infection site . Various types of nanoparticles employ multiple mechanisms simultaneously to fight microbes, such as nitric-oxide-releasing nanoparticles (NO-NPs), chitosan-containing nanoparticles (chitosan-NPs), and metal-containing nanoparticles (metal-NPs) . This multifaceted approach to antimicrobial action hinders the microbes from developing resistance. 3.1. Nanoparticle–Membrane Interaction The interaction between the nanoparticle surface and the bacterial membrane is the starting point for the antimicrobial action of NPs. Their small size allows nanoparticles to penetrate biological barriers effectively. Metal-NPs easily bypass the lipopolysaccharide (LPS) layer of Gram-negative bacteria through their channel proteins . Yang Y. et al. demonstrated a size-dependent effect of gold nanoparticles (AuNPs) on bacterial LPS in promoting neutrophil uptake. Smaller (10 nm) AuNPs promoted the response of neutrophils more than larger (40 and 100 nm) AuNPs . It is well known that smaller nanoparticles have a larger surface-area-to-volume ratio. This means that more of the nanoparticle’s surface is exposed to the surrounding environment . Results from various studies have shown how the size of GO sheets impacts the effectiveness of their antimicrobial properties in different contexts .
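The surface-area-to-volume argument above can be made concrete: for a sphere, SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r, so the 10 nm particles in the Yang Y. et al. comparison expose ten times more surface per unit volume than the 100 nm ones.

```python
def surface_to_volume(diameter_nm):
    """Sphere SA/V ratio: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r, in nm^-1."""
    return 3.0 / (diameter_nm / 2.0)

# The particle sizes compared by Yang Y. et al.
for d in (10, 40, 100):
    print(f"{d:3d} nm AuNP: SA/V = {surface_to_volume(d):.2f} nm^-1")
```

This purely geometric scaling is the basis for the stronger interfacial activity, and faster ion release, of smaller particles discussed in the surrounding text.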
Larger GO sheets have a greater capacity to wrap around bacterial cells, isolating them from their environment and inhibiting their growth in suspension. On the other hand, their interaction with bacteria differs when GO sheets are immobilized on a surface. In this case, smaller GO sheets may exhibit greater antimicrobial activity due to two main factors. Smaller sheets have a higher surface-to-volume ratio, leading to more potential interaction sites with bacterial cells. Smaller sheets have higher defect density, which can increase their reactivity and potentially enhance their antimicrobial properties through oxidative mechanisms . Another relevant toxicity factor responsible for size-dependent antibacterial activity is that smaller nanoparticles dissolve more quickly than larger ones. The rapid dissolution leads to a more rapid release of metal ions . Skandalis N. et al. observed through scanning electron microscopy (SEM) that smaller (40 nm) silver nanoparticles (AgNPs) induced stronger membrane damage in E. coli after 10 h than larger ones (58 nm) . Recently, Zhang Y. et al. proposed ultra-small gold nanoclusters (AuNCs) composed of 25 gold atoms and 18 thiolate ligands. High-resolution transmission electron microscopy (TEM) showed that AuNCs displayed a homogeneous and well-dispersed distribution, and the particle size of AuNCs ranged from 1.5 to 4.0 nm with an average diameter of 2.49 ± 0.30 nm. The results indicated that AuNCs disrupted the Fusobacterium nucleatum membrane potential, with consequent damage to cell wall integrity . Different studies demonstrated that the shape of nanoparticles is another critical parameter with respect to antibacterial activity . In their study, Acharya et al., through FE-SEM images, showed structural damage to bacterial cell walls upon treatment with spherical silver nanoparticles (AgNP-sp) but not with rod-shaped silver nanoparticles (AgNR) .
In another study, the same authors observed the highest bacterial death when Gram-positive and Gram-negative bacteria were treated with nanospheres, compared to nanorods, nanotriangles, and nanohexagons . Hong X. et al. fabricated AgNPs having three different shapes, via a microwave-assisted method, and tested them against various bacteria species. The authors observed the weakest antibacterial activity in silver nanowires, compared to silver nanocubes and silver nanospheres, due to the lower amount of contact between silver nanowires and the bacterial membrane . Electrical potential, or “zeta potential”, is a key parameter in determining the stability and behavior of particles in a liquid medium and affects their antibacterial activity. Strong zeta potentials promote a strong interaction, causing membrane disruption, bacteria flocculation, and a reduction of viability . Zhang Y. et al. synthesized ultra-small gold nanoclusters (AuNCs) by a simple one-pot method, with a zeta potential of −38.8 mV. The results showed that the growth of Fusobacterium nucleatum was significantly hampered, and cell wall integrity was strongly damaged via a membrane depolarization mechanism. Thus, the zeta potential, which depends on the surface charge, is fundamental for the stability of nanoparticles in suspension and affects the initial adsorption of nanoparticles onto the cell membrane. Charge is crucial in bacterial resistance due to its influence on various cellular processes and interactions with antimicrobial agents. Cationic nanoparticles have been demonstrated to effectively depolarize and permeabilize the bacterial membrane, facilitating the direct translocation of NPs to the cytosol region . Inside cells, cationic NPs interact with high affinity with negatively charged DNA, inducing conformational changes and disrupting bacterial replication . In their study, Haidari H. et al.
tested newly synthesized, highly monodispersed, small (<3 nm) polycationic silver nanoclusters (pAgNCs) against a range of common Gram-negative and Gram-positive oral pathogens and against oral biofilm. The pAgNCs displayed greater antibacterial efficacy than similar-sized negatively charged silver nanoparticles or ciprofloxacin . The synthesis of these pAgNCs allowed them to overcome the limits of the anaerobic environment. Indeed, the dissolution of Ag + ions is an oxidation process, and the release rate is highly dependent on the presence of molecular oxygen. The pAgNCs also showed a strong capacity to significantly delay the development of bacterial resistance in anaerobic bacteria commonly found in dental infections, such as Fusobacterium nucleatum and Streptococcus sanguinis . Caudill E. et al. have observed an enhanced electrostatic attraction between positively charged gold nanoparticles functionalized with cationic branched polyethylenimine (bPEI-AuNPs) and Gram-positive bacteria due to the presence of negatively charged groups on the cell surface, such as teichoic acids . 3.2. Nanoparticles Target Efflux System Nanoparticles offer different approaches to overcoming efflux pumps as a defense mechanism adopted by bacteria: (1) creating a competition between substrate and antimicrobial agents; (2) downregulating the expression of efflux pumps; (3) blocking the efflux pumps by a designed molecular plug; (4) interacting directly with efflux pumps, by blocking their active sites or altering their conformation; and (5) indirectly modulating the expression or activity of efflux pumps . Sobhanipoor M.H. et al. observed a reduction in the efflux activity in enterococcal strains treated with zinc oxide nanoparticles (ZnONPs) . In the study conducted by Christena L.R.
et al., copper nanoparticles (CuNPs) exhibited a significant efflux-inhibitory effect in wild-type strains of both Staphylococcus aureus and Pseudomonas aeruginosa and in drug-resistant mutant strains of Staphylococcus aureus. The authors proved that the antibacterial effect is due to Cu(II) ions released from the CuNPs more than the nanoparticle itself . Several metal oxide nanoparticles have been suggested for their combination with thiolated chitosan to tackle the multi-drug resistance problem in bacteria by blocking the efflux pump . Iqbal G. et al. exploited the physical–chemical characteristics of some metals to prepare thiolated-chitosan-coated-cobalt-doped zinc oxide nanoparticles (Co–ZnO), which were then able to induce inhibition of the efflux pump in drug-resistant mutant strains of Staphylococcus aureus . Efflux pumps, often the targets of the nanoparticles employed for combating biofilm-related infections, are characterized by a selective and orchestrated drug outgo . In a study, it was observed that ZnONPs inhibit biofilm formation and virulence factor production in Pseudomonas aeruginosa, by inducing the zinc cation efflux pump (Czc operon) at a genetic level and regulating key transcriptional factors (porin gene opdT and type III repressor ptrA), which directly blocks the efflux pump . 3.3. Nanoparticle-Induced Oxidative Stress Nanomaterials can function through various mechanisms that differ from those of traditional antibiotics, such as the generation of reactive oxygen species (ROS). Oxidative stress has been suggested as the main mechanism in the antimicrobial activity of bacterial cells exposed to GONPs. The high defect densities on the carbon structure act as active sites for oxygen molecules to adsorb onto the GO nanosheet surface. The adsorbed oxygen molecules become more reactive due to their interaction with the GO surface .
These reactive oxygen molecules can then react with other molecules, including those in the cell membrane of bacteria, to generate highly reactive species like hydroxyl radicals. Perreault F. et al. observed a flattened and deformed bacterial shape, indicative of compromised cell integrity, in E. coli deposited on GO-coated surfaces. The same authors also observed that GO nanosheets can oxidize lipid molecules and glutathione (GSH) enzymes, demonstrating their intrinsic oxidative potential. This oxidative effect on glutathione was found to be size-dependent. Smaller GO sheets (0.01 μm 2 ) induce greater oxidation (71%) compared to larger ones (0.65 μm 2 , 49%) . Panda S. et al. described the molecular mechanism behind the antibacterial effect of GO nanosheet metal systems on Gram-negative bacteria E. coli . GO possesses abundant oxygen-containing functional groups like hydroxyl, epoxy, and carboxyl on its surface, which make GO an excellent electron acceptor. When GO comes into contact with a bacterial cell, it can draw electrons from the cell membrane. The electron transfer to GO triggers the production of ROS within the bacteria . Interestingly, it was found that cobalt as a dopant was able to increase the photodynamic and photothermal activity of Co–ZnO. Upon light excitation, these nanoparticles were able to generate ROS with an increased quantum yield, and to generate heat, owing to their magnetic nature, thus helping to kill more drug-resistant mutant strains of Staphylococcus aureus . Gurunathan S. et al. noted that the levels of ROS in GO and reduced graphene oxide (rGO)-treated Pseudomonas aeruginosa were 3.8-fold and 2.7-fold higher, respectively, compared to the level of ROS in control cells .
After 24 h from treatment, the same authors observed DNA fragmentation in cells treated with GO, but not in bacteria treated with rGO, which suggests that cells require longer exposure to rGO to induce DNA fragmentation or that the mechanism of cell death caused by rGO could be different from that of GO after ROS production . In particular, the generation of oxidative stress is the main mechanism by which metal NPs damage essential cellular components, such as proteins and nucleic acids. In a recent study, Wang Y. et al. synthesized stable gold nanoclusters (AuNCs) that are protected with 6-mercaptohexanoic acid (MHA). These nanoclusters consisted of 25 gold atoms and 18 thiolate ligands, formed through a one-pot reduction process converting gold (III) to gold (0). The results showed the antibacterial properties of these Au 25 NCs against both Gram-negative and Gram-positive bacteria after the disruption of antioxidant defense systems by the increase of intracellular ROS level and decrease of glutathione (GSH) . The same authors also observed that the increase in ROS production was greater in Gram-negative bacteria than in Gram-positive ones . Similarly, Zhang Y. et al. observed an increase of the level of ROS in Fusobacterium nucleatum after treatment with ultra-small gold nanoclusters (AuNCs) consisting of 25 gold atoms and 18 thiolate ligands formed through a one-pot reduction process . The generation of ROS is the mechanism underlying the antibacterial action of nanozymes (NZs), which refer to nanomaterials that have catalytic properties like natural enzymes. For this reason, NZs have recently advanced research in the field of periodontics, specifically for the maintenance of periodontal health. In particular, NZs are employed for disrupting dental plaque, a complex biofilm composed of diverse bacterial species, which is notoriously recalcitrant to traditional antimicrobial agents.
The exopolysaccharide (EPS) matrix acts as a protective shield encasing the microbial community, hindering penetration by antimicrobial agents and thus limiting their efficacy. Furthermore, the acidic microenvironment created within the biofilm promotes enamel demineralization, leading to dental caries. Nanohybrid systems are designed to be activated by the acidic environment within oral biofilms, allowing NZs to convert hydrogen peroxide (H 2 O 2 ), produced by bacteria, into free radicals that remain within the three-dimensional structure of dental plaque. The combination of nanozymes and H 2 O 2 synergistically degrades EPS and eliminates biofilm-forming bacteria . Huang Y. et al. exploited the pathological (sugar-rich/acidic) conditions using a nanohybrid system to increase intrinsic H 2 O 2 production and trigger pH-dependent ROS generation for efficient biofilm virulence targeting. The nanohybrid contains glucose–oxidase (GOx) that catalyzes glucose present in biofilms to increase intrinsic H 2 O 2 , which is converted by iron oxide nanoparticles with peroxidase-like activity into ROS at acidic pH. The authors developed dextran-coated iron oxide nanozymes (Dex-IONP) that display strong catalytic peroxidase-like activity at acidic pH values, able to target biofilms with high specificity and to prevent severe damage without impacting surrounding oral tissues in vivo. This system selectively kills the pathogenic bacteria while sparing commensal bacteria. Furthermore, compared to chlorhexidine (positive control), which disrupted oral microbiota diversity, the nanohybrid had significantly higher efficacy without affecting soft tissues and the oral–gastrointestinal microbiomes, while modulating dental-health-associated microbial activity in vivo . Gao L. et al. also developed catalytic nanoparticles (CAT-NPs) with peroxidase-like activity to target and disrupt plaque biofilm.
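The acid-triggered, peroxidase-like catalysis described above can be caricatured as a Michaelis–Menten rate multiplied by a pH gate. The linear gate shape and the kinetic constants below are assumptions for illustration, not a fitted model of Dex-IONP or of any cited system.

```python
def nanozyme_rate(h2o2_mM, pH, vmax=1.0, km_mM=0.5):
    """Toy Michaelis–Menten rate for a peroxidase-like nanozyme that is
    switched on by the acidic biofilm microenvironment. The linear pH
    gate (fully on at pH <= 5, fully off at pH >= 7) and the kinetic
    constants are assumptions for illustration."""
    gate = min(1.0, max(0.0, (7.0 - pH) / 2.0))
    return gate * vmax * h2o2_mM / (km_mM + h2o2_mM)

# Same H2O2 level: active inside acidic plaque, quiescent at healthy pH.
print(f"pH 4.5 (cariogenic biofilm): rate = {nanozyme_rate(1.0, 4.5):.2f}")
print(f"pH 7.0 (healthy tissue):     rate = {nanozyme_rate(1.0, 7.0):.2f}")
```

A selectivity of this shape is what confines radical generation to the plaque interior while sparing surrounding tissue at neutral pH.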
CAT-NPs containing biocompatible Fe 3 O 4 were designed to generate free radicals by converting H 2 O 2 . Additionally, the generation of these powerful free radicals is specifically triggered by acidic conditions, which are prevalent within dental plaque . This targeted approach ensures that the action of CAT-NPs/H 2 O 2 remains localized, minimizing potential harm to healthy oral tissues. Wang Y. et al. designed iron-based nanozymes (IONzymes) and iron sulfide nanozymes (ISNzymes) with peroxidase-like activity catalyzing the generation of free radicals from H 2 O 2 produced by S. gordonii , with the resulting radicals disrupting the biofilm matrix . 3.4. Combination of Therapies The properties of nanomaterials allow us to overcome some of the challenges of the strategies employed for combating periodontal diseases. Photodynamic therapy (PDT), an emerging approach that involves photosensitizers, light, and molecular oxygen, has shown promise for fighting periodontitis . However, PDT does not always lead to the desired therapeutic outcomes, since some photosensitizers have strong hydrophobic cores, making them difficult for periodontal pathogenic bacteria to absorb efficiently . To overcome this limitation, Li Z. et al. have developed a strategy to enhance the solubility and bacterial adsorption of the hydrophobic photosensitizer chlorin e6 (Ce6). They achieved this by conjugating Ce6 with a cationic cell-penetrating peptide known as TAT. To further optimize the treatment, the TAT–Ce6 conjugate was used to create self-assembled nanoparticles that efficiently load tinidazole (TDZ), a conventional antibiotic. The synergistic combination of PDT and antibiotic therapy, delivered through advanced nanoparticle technology, led to a great inhibitory effect against periodontal pathogens in vitro and in vivo . In another study, Sun X. et al.
combined the photosensitizer chlorin e6 (Ce6), the fluorescent dye coumarin 6 (C6), and magnetic iron oxide nanoparticles (Fe 3 O 4 ). The co-loading of Ce6 and C6 enabled real-time antibacterial PDT monitoring by ratio emissions with the same wavelength. Meanwhile, Fe 3 O 4 under a magnetic field enabled the targeting of infection sites, eliminating multispecies oral biofilm . Cuprous oxide (Cu 2 O), a promising material for photodynamic therapy (PDT), suffers from a major drawback: the rapid recombination of photoexcited electrons and holes. This limits its effectiveness in generating ROS. To address this issue, He Y. et al. have developed a novel nanosystem (Cu 2 O@rGO) via the in situ growth of Cu 2 O on reduced graphene oxide (rGO) sheets . rGO acts as an electron trap able to capture photoexcited electrons from Cu 2 O, preventing their recombination with holes. rGO facilitates the rapid transfer of electrons away from Cu 2 O. The incorporation of rGO significantly boosts the photocurrent of Cu 2 O@rGO, leading to a higher generation of charge carriers and improved electron–hole separation, demonstrating enhanced antibacterial rates against both E. coli and S. aureus . Periodontal disease often requires surgical intervention, and guided tissue regeneration (GTR) is a technique that uses membranes to guide tissue growth and healing. However, these membranes can be susceptible to bacterial infection, which can hinder the healing process and lead to complications. To address this issue, Seo N. et al. developed a new type of membrane using polycaprolactone (PCL), a biodegradable polymer, and zinc oxide (ZnO) nanoparticles. The PCL/ZnO membranes showed significantly reduced bacterial adhesion of a common oral bacterium, Porphyromonas gingivalis , and, importantly, the ZnO nanoparticles did not negatively impact the growth of osteoblasts.
This study suggests that PCL/ZnO membranes have the potential to improve the success of GTR procedures by preventing bacterial infection and promoting tissue regeneration . Nanofiber technology holds immense potential for developing innovative periodontal therapies. Researchers are exploring various approaches, including DCH-loaded nanofibers for inhibiting pathogens and promoting healing, PCL-loaded ZnO nanofibers for enhanced bone regeneration , and SPEEK-loaded nanofibers incorporating functionalized zirconia nanoparticles and curcumin for sustained drug release, improved cell viability, and wound healing . In another study, it was observed that the combination of two subsequent layers of nanoparticles characterized by osteoconductive (nHA) and antibacterial bimetallic nanocomposite (nZnO:Ag) inhibited bacterial growth without causing major toxic effects towards osteoblastic cells, and therefore may constitute a promising solution for GTR procedure . Lin J. et al. designed a novel hybrid hydrogel system that combines antibiotic therapy with photothermal treatment. The researchers developed a near-infrared light (NIR)-activated hybrid hydrogel that allows the release of antibacterial drugs and activation of photothermal treatment. Such antibiotics rapidly eliminate periodontal pathogens in the periodontal pocket, and the photothermal treatment maintains low bacterial retention after the drug therapy . Zhao C. et al. explored the use of carbon dots (CDs), specifically perilla-derived carbon nanodots (CNDs), as photosensitizers for antibacterial therapy, combined with near-infrared (NIR). These CNDs exhibited NIR absorption and emission, which is a critical feature for their role in PDT. NIR light is advantageous because it penetrates tissues more effectively than visible light, which could be useful in dental practice. Antibacterial activity measurement showed that the CNDs could inactivate 99.99% of S. aureus , E. faecalis , and methicillin-resistant S. 
aureus under 660 nm light irradiation for 5 min, while for the Gram-negative bacteria, the bactericidal efficiency was lower than 50%. Intracellular analysis showed that the antibacterial mechanism was due to the ROS generated on the surface of bacterial membranes upon NIR excitation, as well as the hydrophobic interaction between the hydrophobic groups and Gram-positive bacteria membranes . 3.5. Targeted Drug Delivery The use of nanotechnology-based carriers has allowed the delivery of drugs directly to the infection site, leading to a double advantage: higher local drug concentrations and lower systemic exposure . Different studies focusing on the synergistic activity of ZnONPs with more than 25 different antibiotics against S. aureus and E. coli have concluded that ZnONPs can enhance the antibacterial activities of penicillin, cephalosporins, aminoglycosides, glycopeptides, macrolides, lacosamide, gentamicin, clarithromycin, ofloxacin, ceftriaxone, and tetracycline . Gold nanoparticles have a stable surface for binding various antibiotic agents and may significantly increase the antibacterial effect of drugs by enhancing contact with bacterial cell walls . The antibacterial activity of vancomycin-capped gold nanoparticles against vancomycin-resistant Enterococcus and E. coli was 64 times greater than that of vancomycin alone . Nanoparticles can be designed to deliver antibiotics directly to infected cells, reducing the required dosage and minimizing side effects. Saeidi Z. et al. proposed a local dosage form, a thermosensitive gel containing clindamycin niosomes and solid lipid nanoparticles loaded with fluconazole (FLZ), for treating oral infections due to Candida albicans and Gram-positive bacteria. The local absorption of clindamycin and fluconazole directly in the oral cavity reduces the amount of them needed, and reduces systemic side effects such as diarrhea, vomiting, stomach upset, and rash .
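Antibacterial efficacies such as the 99.99% inactivation quoted above are conventionally expressed as log10 reductions in viable counts: a 4-log reduction is 99.99% kill, while a kill below 50% corresponds to less than about 0.3 log. A small conversion sketch:

```python
import math

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction in viable counts; 4-log corresponds to 99.99% kill."""
    return math.log10(cfu_before / cfu_after)

def percent_killed(log_red):
    """Convert a log10 reduction back to a percentage of cells killed."""
    return 100.0 * (1.0 - 10.0 ** (-log_red))

print(log_reduction(1e6, 1e2))   # 4.0  (99.99% of cells killed)
print(percent_killed(2.0))       # ~99.0 (a 2-log reduction)
print(log_reduction(1e6, 5e5))   # ~0.3 (a 50% kill, as for the Gram-negative case)
```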
The results of a recent study demonstrated that the anti-biofilm activity of CuNPs and ZnONPs combined with gentamicin at their lowest concentrations was more efficient than that of the antibiotic alone. In this study, SEM images showed that CuNPs and ZnONPs used in combination with gentamicin had the highest antibacterial activity when compared with treatment with CuNPs, ZnONPs, and antibiotics alone . Chamundeeswari M. et al. created chitosan-capped gold nanoparticles loaded with ampicillin; despite a 50% reduction in ampicillin dosage, the conjugate showed bactericidal activity two times stronger than that of the antibiotic alone. MIC values were determined to be 27.4 μg/mL for E. coli and 20.6 μg/mL for S. aureus and K. mobilis when compared to free ampicillin . Ampicillin was employed by Chavan C. et al. as a reducing and capping agent to create gold nanoparticles that were ampicillin-coated. Amp-AuNPs build up on the bacterial surface and lead to the formation of membrane-level holes that allow them to enter the cell. Amp-AuNPs have demonstrated efficacy against ampicillin-resistant E. coli , and due to their strong adhesive qualities, they can prevent the development of biofilm . Payne J.N. et al. demonstrated that the conjugation of kanamycin with AuNPs (Kan-AuNPs) led to broad-spectrum, dose-dependent activity with a minimum inhibitory concentration lower than that of the antibiotic itself. In this study, the resulting CC50 strongly indicated that Kan-AuNPs would be efficacious in vivo . Existing data suggest that nanoparticles can be used to locally deliver drugs and to protect them from pH and enzymatic degradation in the periodontal lesion . Wang L. et al.
designed a novel self-assembled, dual-responsive, and dual-drug-loading nanocarrier system, which included a hydrophobic lipid core formed by 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-poly(ethylene glycol) (DSPE-PEG) loaded with alpha-lipoic acid (ALA), and a hydrophilic shell comprising a poly(amidoamine) dendrimer (PAMAM) that electrostatically adsorbed minocycline hydrochloride (Mino). This unique design allows the controlled release of the antioxidant ALA under lipase stimulation from periodontal pathogens and of the antimicrobial Mino under the low pH of the inflammatory microenvironment . Another critical challenge for periodontitis therapy is thoroughly eliminating the dental-plaque biofilm, particularly penetrating the deep periodontal tissue without disturbing the commensal microflora of the oral cavity. Tong F. et al. constructed Fe 3 O 4 magnetic nanoparticles loaded with minocycline (FPM NPs) using a co-precipitation method. The multifunctional nanoparticles allowed for improved drug penetration and exhibited intense anti-biofilm activity by disrupting the integrity of the bacterial biofilm. The periodontal inflammation recovered well after FPM NP treatment in rat models, demonstrating good biocompatibility . Comorbidity often occurs in patients with periodontitis, thus representing a double challenge . Xu S. et al. proposed a novel approach for treating the complex relationship between periodontitis and hypertension, by combining multiple therapeutic strategies in a single delivery system. In this study, a controlled-release composite hydrogel approach was developed with dual antibacterial and anti-inflammatory activities. Specifically, a dual antibacterial hydrogel (CS-PA) was fabricated by cross-linking chitosan (CS), which displays inherent antibacterial features, with a peptide (AMP)-modified polyethylene glycol (PEG). For long-term anti-inflammatory effects, curcumin was incorporated into nanoparticles (CNP) and then placed in the hydrogel.
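Stimulus-gated release of the kind used in these dual-responsive carriers is often summarized with a first-order profile, F(t) = 1 - exp(-k*t), where the rate constant jumps when the trigger (lipase for ALA, low pH for Mino) is present. The rate constants and the basal-leak fraction below are illustrative assumptions, not fitted data from the cited studies.

```python
import math

def released_fraction(t_hours, k_per_hour, triggered):
    """Toy first-order release, F(t) = 1 - exp(-k*t). When the stimulus
    (lipase for ALA, low pH for Mino) is absent, only a slow basal leak
    (here 2% of the triggered rate, an assumption) remains."""
    k = k_per_hour if triggered else 0.02 * k_per_hour
    return 1.0 - math.exp(-k * t_hours)

# After 12 h the triggered compartment is largely emptied while the
# untriggered one has barely leaked.
print(f"triggered:   {released_fraction(12, 0.3, True):.2f}")
print(f"untriggered: {released_fraction(12, 0.3, False):.2f}")
```

The large gap between the two curves is the design goal of such carriers: payload stays sequestered until the pathological microenvironment itself switches the release on.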
CS-PA/CNP administered to the gingival sulcus in a mouse model of periodontitis complicated with hypertension showed a beneficial therapeutic impact on both periodontitis and hypertension at the same time . The interaction between the nanoparticle surface and the bacterial membrane is the starting point for the antimicrobial action of NPs. Their small size allows nanoparticles to penetrate biological barriers effectively. Metal-NPs easily bypass Gram-negative bacteria’s lipopolysaccharide (LPS), through their channel proteins . Yang Y. et al. demonstrated a size-dependent effect of gold nanoparticles (AuNPs) on bacterial LPS in promoting neutrophil uptake. Smaller (10 nm) AuNPs promoted the response of neutrophils more than larger (40 and 100 nm) AuNPs . It is well known that smaller nanoparticles have a larger surface-area-to-volume ratio. This means that more of the nanoparticle’s surface is exposed to the surrounding environment . Results from various studies have showed how the size of GO sheets impacts the effectiveness of their antimicrobial properties in different contexts . Larger GO sheets have a greater capacity to wrap around bacterial cells, isolating them from their environment and inhibiting their growth in suspension On the other hand, their interaction with bacteria differs when GO sheets are immobilized on a surface. In this case, smaller GO sheets may exhibit greater antimicrobial activity due to two main factors. Smaller sheets have a higher surface-to-volume ratio, leading to more potential interaction sites with bacterial cells. Smaller sheets have higher defect density, which can increase their reactivity and potentially enhance their antimicrobial properties through oxidative mechanisms . Another relevant toxicity factor responsible for size-dependent antibacterial activity is that smaller nanoparticles dissolve more quickly than larger ones. The rapid dissolution leads to a more rapid release of metal ions . Skandalis N. et al. 
observed through scanning electron microscopy (SEM) that smaller (40 nm) silver nanoparticles (AgNPs) induced stronger membrane damage in E. coli after 10 h than larger ones (58 nm). Recently, Zhang Y. et al. proposed ultra-small gold nanoclusters (AuNCs) composed of 25 gold atoms and 18 thiolate ligands. High-resolution transmission electron microscopy (TEM) showed that the AuNCs displayed a homogeneous and well-dispersed distribution, with particle sizes ranging from 1.5 to 4.0 nm and an average diameter of 2.49 ± 0.30 nm. The results indicated that the AuNCs disrupted the membrane potential of Fusobacterium nucleatum, with consequent damage to cell wall integrity. Different studies demonstrated that the shape of nanoparticles is another critical parameter with respect to antibacterial activity. In their study, Acharya et al., through FE-SEM images, showed structural damage to bacterial cell walls upon treatment with spherical silver nanoparticles (AgNP-sp) but not with rod-shaped silver nanoparticles (AgNR). In another study, the same authors observed the highest bacterial death when Gram-positive and Gram-negative bacteria were treated with nanospheres, compared to nanorods, nanotriangles, and nanohexagons. Hong X. et al. fabricated AgNPs with three different shapes via a microwave-assisted method and tested them against various bacterial species. The authors observed the weakest antibacterial activity in silver nanowires, compared to silver nanocubes and silver nanospheres, due to the smaller contact area between the silver nanowires and the bacterial membrane. Electrical potential, or "zeta potential", is a key parameter in determining the stability and behavior of particles in a liquid medium and affects their antibacterial activity. High zeta potentials promote strong interactions, causing membrane disruption, bacterial flocculation, and reduced viability. Zhang Y. et al.
synthesized ultra-small gold nanoclusters (AuNCs) by a simple one-pot method, with a zeta potential of −38.8 mV. The results showed that the growth of Fusobacterium nucleatum was significantly hampered and that cell wall integrity was strongly damaged via a membrane depolarization mechanism. Thus, the zeta potential, which depends on the surface charge, is fundamental for the stability of nanoparticles in suspension and affects the initial adsorption of nanoparticles onto the cell membrane. Charge is crucial in bacterial resistance due to its influence on various cellular processes and interactions with antimicrobial agents. Cationic nanoparticles have been demonstrated to effectively depolarize and permeabilize the bacterial membrane, facilitating the direct translocation of NPs to the cytosol. Inside cells, cationic NPs interact with high affinity with negatively charged DNA, inducing conformational changes and disrupting bacterial replication. In their study, Haidari H. et al. tested newly synthesized, highly monodispersed, small (<3 nm) polycationic silver nanoclusters (pAgNCs) against a range of common Gram-negative and Gram-positive oral pathogens and against oral biofilm. The pAgNCs displayed greater antibacterial efficacy than similar-sized negatively charged silver nanoparticles or ciprofloxacin. The synthesis of these pAgNCs allowed them to overcome the limits of the anaerobic environment: the dissolution of Ag⁺ ions is an oxidation process, and the release rate is highly dependent on the presence of molecular oxygen. The pAgNCs also showed a strong capacity to significantly delay the development of bacterial resistance in anaerobic bacteria commonly found in dental infections, such as Fusobacterium nucleatum and Streptococcus sanguinis. Caudill E. et al.
have observed an enhanced electrostatic attraction between positively charged gold nanoparticles functionalized with cationic branched polyethylenimine (bPEI-AuNPs) and Gram-positive bacteria due to the presence of negatively charged groups, such as teichoic acids, on the cell surface. Nanoparticles offer different approaches to overcoming efflux pumps as a defense mechanism adopted by bacteria: (1) creating competition between substrates and antimicrobial agents; (2) downregulating the expression of efflux pumps; (3) blocking the efflux pumps with a designed molecular plug; (4) interacting directly with efflux pumps, by blocking their active sites or altering their conformation; and (5) indirectly modulating the expression or activity of efflux pumps. Sobhanipoor M.H. et al. observed a reduction in efflux activity in enterococcal strains treated with zinc oxide nanoparticles (ZnONPs). In the study conducted by Christena L.R. et al., copper nanoparticles (CuNPs) exhibited a significant efflux-inhibitory effect in wild-type strains of both Staphylococcus aureus and Pseudomonas aeruginosa and in drug-resistant mutant strains of Staphylococcus aureus. The authors proved that the antibacterial effect is due to Cu(II) ions released from the CuNPs more than to the nanoparticles themselves. Several metal oxide nanoparticles have been suggested in combination with thiolated chitosan to tackle the multi-drug resistance problem in bacteria by blocking the efflux pump. Iqbal G. et al. exploited the physical–chemical characteristics of some metals to prepare thiolated-chitosan-coated cobalt-doped zinc oxide nanoparticles (Co–ZnO), which were able to inhibit the efflux pump in drug-resistant mutant strains of Staphylococcus aureus. Efflux pumps, often the targets of the nanoparticles employed for combating biofilm-related infections, are characterized by selective and orchestrated drug expulsion.
In a study, it was observed that ZnONPs inhibit biofilm formation and virulence factor production in Pseudomonas aeruginosa by inducing the zinc cation efflux pump (Czc operon) at a genetic level and regulating key transcriptional factors (the porin gene opdT and the type III repressor ptrA), which directly blocks the efflux pump. Nanomaterials can function through various mechanisms that differ from those of traditional antibiotics, such as the generation of ROS. Oxidative stress has been suggested as the main mechanism in the antimicrobial activity observed in bacterial cells exposed to GONPs. The high defect densities on the carbon structure act as active sites for oxygen molecules to adsorb onto the GO nanosheet surface. The adsorbed oxygen molecules become more reactive due to their interaction with the GO surface. These reactive oxygen molecules can then react with other molecules, including those in the bacterial cell membrane, to generate highly reactive species like hydroxyl radicals. Perreault F. et al. observed a flattened and deformed bacterial shape, indicative of compromised cell integrity, in E. coli deposited on GO-coated surfaces. The same authors also observed that GO nanosheets can oxidize lipid molecules and glutathione (GSH), demonstrating their intrinsic oxidative potential. This oxidative effect on glutathione was found to be size-dependent: smaller GO sheets (0.01 μm²) induced greater oxidation (71%) compared to larger ones (0.65 μm², 49%). Panda S. et al. described the molecular mechanism behind the antibacterial effect of GO nanosheet–metal systems on the Gram-negative bacterium E. coli. GO possesses abundant oxygen-containing functional groups, like hydroxyl, epoxy, and carboxyl, on its surface, which make GO an excellent electron acceptor. When GO comes into contact with a bacterial cell, it can draw electrons from the cell membrane, and this electron transfer to GO triggers the production of ROS within the bacteria.
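The size dependence running through these findings (the 10 nm versus 40 and 100 nm AuNPs, and the stronger glutathione oxidation by smaller GO sheets) partly reflects simple geometry: for an ideal sphere, the surface-area-to-volume ratio scales as 6/d. A minimal sketch, assuming perfectly spherical particles (real nanoparticles are not):

```python
import math

def surface_to_volume_ratio(diameter_nm: float) -> float:
    """SA/V of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 6/d, in nm^-1."""
    radius = diameter_nm / 2
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface / volume

# Diameters mirror the AuNP sizes compared by Yang Y. et al. (10, 40, 100 nm):
# a 10 nm particle exposes ten times more surface per unit volume than a 100 nm one.
for d in (10, 40, 100):
    print(f"d = {d:3d} nm -> SA/V = {surface_to_volume_ratio(d):.2f} nm^-1")
```

The same scaling argument underlies the faster dissolution and ion release of smaller metal nanoparticles noted above.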
Interestingly, it was found that cobalt as a dopant was able to increase the photodynamic and photothermal activity of Co–ZnO. Upon light excitation, these nanoparticles were able to generate ROS with an increased quantum yield and to generate heat, owing to their magnetic nature, thus helping to kill more drug-resistant mutant strains of Staphylococcus aureus. Gurunathan S. et al. noted that the levels of ROS in GO- and reduced graphene oxide (rGO)-treated Pseudomonas aeruginosa were 3.8-fold and 2.7-fold higher, respectively, compared to the level of ROS in control cells. At 24 h after treatment, the same authors observed DNA fragmentation in cells treated with GO, but not in bacteria treated with rGO, which suggests that cells require longer exposure to rGO to induce DNA fragmentation or that the mechanism of cell death caused by rGO after ROS production could differ from that of GO. In particular, the generation of oxidative stress is the main mechanism by which metal NPs damage essential cellular components, such as proteins and nucleic acids. In a recent study, Wang Y. et al. synthesized stable gold nanoclusters (AuNCs) protected with 6-mercaptohexanoic acid (MHA). These nanoclusters consisted of 25 gold atoms and 18 thiolate ligands, formed through a one-pot reduction process converting gold (III) to gold (0). The results showed the antibacterial properties of these Au₂₅ NCs against both Gram-negative and Gram-positive bacteria through disruption of antioxidant defense systems, with an increase in intracellular ROS levels and a decrease in glutathione (GSH). The same authors also observed that the increase in ROS production was greater in Gram-negative bacteria than in Gram-positive ones. Similarly, Zhang Y. et al.
observed an increase in the level of ROS in Fusobacterium nucleatum after treatment with ultra-small gold nanoclusters (AuNCs) consisting of 25 gold atoms and 18 thiolate ligands, formed through a one-pot reduction process. The generation of ROS is also the mechanism underlying the antibacterial action of nanozymes (NZs), nanomaterials that have catalytic properties like natural enzymes. For this reason, NZs have recently advanced research in the field of periodontics, specifically for the maintenance of periodontal health. In particular, NZs are employed for disrupting dental plaque, a complex biofilm composed of diverse bacterial species, which is notoriously recalcitrant to traditional antimicrobial agents. The exopolysaccharide (EPS) matrix acts as a protective shield encasing the microbial community, hindering penetration by antimicrobial agents and thus limiting their efficacy. Furthermore, the acidic microenvironment created within the biofilm promotes enamel demineralization, leading to dental caries. Nanohybrid systems have been developed that exploit the acidic environment within the oral biofilm for activation, allowing NZs to convert hydrogen peroxide (H₂O₂) produced by bacteria into free radicals that remain within the three-dimensional structure of dental plaque. The combination of nanozymes and H₂O₂ synergistically degrades EPS and eliminates biofilm-forming bacteria. Huang Y. et al. exploited the pathological (sugar-rich/acidic) conditions using a nanohybrid system to increase intrinsic H₂O₂ production and trigger pH-dependent ROS generation for efficient biofilm virulence targeting. The nanohybrid contains glucose oxidase (GOx), which catalyzes glucose present in biofilms to increase intrinsic H₂O₂, which is in turn converted into ROS at acidic pH by iron oxide nanoparticles with peroxidase-like activity.
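The peroxidase-like activity of nanozymes such as these is conventionally characterized with Michaelis–Menten kinetics, just like natural enzymes. The sketch below uses hypothetical Km and Vmax values purely to illustrate the saturation behavior; it is not fitted to data from any cited study:

```python
def michaelis_menten(s_mm: float, vmax: float = 1.0, km_mm: float = 0.5) -> float:
    """Initial reaction velocity v = Vmax * [S] / (Km + [S]).

    vmax and km_mm are hypothetical placeholder values, not measured data.
    """
    return vmax * s_mm / (km_mm + s_mm)

# At [S] = Km the rate is exactly half of Vmax; at high [S] it saturates toward Vmax.
for s in (0.1, 0.5, 2.0, 10.0):
    print(f"[H2O2] = {s:5.1f} mM -> v = {michaelis_menten(s):.3f} x Vmax")
```

In nanozyme studies, the apparent Km toward H₂O₂ and the maximal velocity Vmax are the figures of merit usually reported when comparing peroxidase-like activity across materials.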
Huang Y. et al. developed dextran-coated iron oxide nanozymes (Dex-IONP) that display strong catalytic peroxidase-like activity at acidic pH values and are able to target biofilms with high specificity and prevent severe damage without impacting surrounding oral tissues in vivo. This system selectively kills the pathogenic bacteria while sparing commensal bacteria. Furthermore, compared to chlorhexidine (positive control), which disrupted oral microbiota diversity, the nanohybrid had significantly higher efficacy without affecting soft tissues or the oral–gastrointestinal microbiomes, while modulating dental-health-associated microbial activity in vivo. Gao L. et al. also developed catalytic nanoparticles (CAT-NPs) with peroxidase-like activity to target and disrupt plaque biofilm. CAT-NPs containing biocompatible Fe₃O₄ were designed to generate free radicals by converting H₂O₂. Additionally, the generation of these powerful free radicals is specifically triggered by acidic conditions, which are prevalent within dental plaque. This targeted approach ensures that the action of CAT-NPs/H₂O₂ remains localized, minimizing potential harm to healthy oral tissues. Wang Y. et al. designed iron-based nanozymes (IONzymes) and iron sulfide nanozymes (ISNzymes) with peroxidase-like activity catalyzing the generation of free radicals from H₂O₂ produced by S. gordonii, with the resulting radicals disrupting the biofilm matrix. The properties of nanomaterials make it possible to overcome some of the challenges of the strategies employed for combating periodontal diseases. Photodynamic therapy (PDT), an emerging approach that involves photosensitizers, light, and molecular oxygen, has shown promise for fighting periodontitis. However, PDT does not always lead to the desired therapeutic outcomes, since some photosensitizers have strongly hydrophobic cores, making them difficult for periodontal pathogenic bacteria to absorb efficiently. To overcome this limitation, Li Z.
et al. developed a strategy to enhance the solubility and bacterial adsorption of the hydrophobic photosensitizer chlorin e6 (Ce6). They achieved this by conjugating Ce6 with a cationic cell-penetrating peptide known as TAT. To further optimize the treatment, the TAT–Ce6 conjugate was used to create self-assembled nanoparticles that efficiently load tinidazole (TDZ), a conventional antibiotic. The synergistic combination of PDT and antibiotic therapy, delivered through advanced nanoparticle technology, led to a strong inhibitory effect against periodontal pathogens in vitro and in vivo. In another study, Sun X. et al. combined the photosensitizer chlorin e6 (Ce6), the fluorescent dye coumarin 6 (C6), and magnetic iron oxide nanoparticles (Fe₃O₄). The co-loading of Ce6 and C6 enabled real-time monitoring of antibacterial PDT through ratiometric emissions at the same wavelength, while Fe₃O₄ under a magnetic field enabled targeting of infection sites and elimination of multispecies oral biofilm. Cuprous oxide (Cu₂O), a promising material for photodynamic therapy (PDT), suffers from a major drawback: the rapid recombination of electrons and holes, which limits its effectiveness in generating ROS. To address this issue, He Y. et al. developed a novel nanosystem (Cu₂O@rGO) via the in situ growth of Cu₂O on reduced graphene oxide (rGO) sheets. rGO acts as an electron trap able to capture photoexcited electrons from Cu₂O, preventing their recombination with holes and facilitating the rapid transfer of electrons away from Cu₂O. The incorporation of rGO significantly boosts the photocurrent of Cu₂O@rGO, leading to a higher generation of charge carriers and improved electron–hole separation, and demonstrating enhanced antibacterial rates against both E. coli and S. aureus. Periodontal disease often requires surgical intervention, and guided tissue regeneration (GTR) is a technique that uses membranes to guide tissue growth and healing.
However, these membranes can be susceptible to bacterial infection, which can hinder the healing process and lead to complications. To address this issue, Seo N. et al. developed a new type of membrane using polycaprolactone (PCL), a biodegradable polymer, and zinc oxide (ZnO) nanoparticles. The PCL/ZnO membranes showed significantly reduced adhesion of common oral bacteria such as Porphyromonas gingivalis, and, importantly, the ZnO nanoparticles did not negatively impact the growth of osteoblasts. This study suggests that PCL/ZnO membranes have the potential to improve the success of GTR procedures by preventing bacterial infection and promoting tissue regeneration. Nanofiber technology holds immense potential for developing innovative periodontal therapies. Researchers are exploring various approaches, including DCH-loaded nanofibers for inhibiting pathogens and promoting healing, PCL-loaded ZnO nanofibers for enhanced bone regeneration, and SPEEK-loaded nanofibers incorporating functionalized zirconia nanoparticles and curcumin for sustained drug release, improved cell viability, and wound healing. In another study, it was observed that the combination of two subsequent layers of nanoparticles characterized by osteoconductive (nHA) and antibacterial bimetallic nanocomposite (nZnO:Ag) properties inhibited bacterial growth without causing major toxic effects towards osteoblastic cells, and may therefore constitute a promising solution for the GTR procedure. Lin J. et al. designed a novel hybrid hydrogel system that combines antibiotic therapy with photothermal treatment. The researchers developed a near-infrared light (NIR)-activated hybrid hydrogel that allows the release of antibacterial drugs and the activation of photothermal treatment. The released antibiotics rapidly eliminate periodontal pathogens in the periodontal pocket, and the photothermal treatment maintains low bacterial retention after the drug therapy. Zhao C. et al.
explored the use of carbon dots (CDs), specifically perilla-derived carbon nanodots (CNDs), as photosensitizers for antibacterial therapy combined with near-infrared (NIR) light. These CNDs exhibited NIR absorption and emission, a critical feature for their role in PDT. NIR light is advantageous because it penetrates tissues more effectively than visible light, which could be useful in dental practice. Antibacterial activity measurements showed that the CNDs could inactivate 99.99% of S. aureus, E. faecalis, and methicillin-resistant S. aureus under 660 nm light irradiation for 5 min, while for Gram-negative bacteria, the bactericidal efficiency was lower than 50%. Intracellular analysis showed that the antibacterial mechanism was due to the ROS generated on the surface of bacterial membranes upon NIR excitation, as well as the hydrophobic interaction between the hydrophobic groups and Gram-positive bacterial membranes. The use of nanotechnology-based carriers allows the delivery of drugs directly to the infection site, leading to a double advantage: higher local drug concentrations and fewer systemic side effects. Different studies focusing on the synergistic activity of ZnONPs with more than 25 different antibiotics against S. aureus and E. coli have concluded that ZnONPs can enhance the antibacterial activities of penicillin, cephalosporins, aminoglycosides, glycopeptides, macrolides, lacosamide, gentamicin, clarithromycin, ofloxacin, ceftriaxone, and tetracycline. Gold nanoparticles have a stable surface for binding various antibiotic agents and may significantly increase the antibacterial effect of drugs by enhancing contact with bacterial cell walls. The antibacterial activity of vancomycin-capped gold nanoparticles against vancomycin-resistant Enterococcus and E. coli was 64 times greater than that of vancomycin alone. Nanoparticles can be designed to deliver antibiotics directly to infected cells, reducing the required dosage and minimizing side effects. Saeidi Z. et al.
proposed a local dosage form, a thermosensitive gel containing clindamycin niosomes and solid lipid nanoparticles loaded with fluconazole (FLZ), for treating oral infections due to Candida albicans and Gram-positive bacteria. The local absorption of clindamycin and fluconazole directly in the oral cavity reduces the amounts needed and reduces systemic side effects such as diarrhea, vomiting, stomach upset, and rash. The results of a recent study demonstrated that the anti-biofilm activity of CuNPs and ZnONPs combined with gentamicin at their lowest concentrations was more efficient than that of the antibiotic itself. In this study, SEM images showed that CuNPs and ZnONPs used in combination with gentamicin had the highest antibacterial activity when compared with treatment with CuNPs, ZnONPs, or the antibiotic alone. Chamundeeswari M. et al., who created chitosan-capped gold nanoparticles loaded with ampicillin, reported that the compound had a bactericidal activity two times stronger than the antibiotic alone at a 50% reduction in ampicillin dosage. MIC values were determined to be 27.4 μg/mL for E. coli and 20.6 μg/mL for S. aureus and K. mobilis when compared to free ampicillin. Ampicillin was employed by Chavan C. et al. as a reducing and capping agent to create ampicillin-coated gold nanoparticles. Amp-AuNPs build up on the bacterial surface and lead to the formation of membrane-level holes that allow them to enter the cell. Amp-AuNPs have demonstrated efficacy against ampicillin-resistant E. coli, and due to their strong adhesive qualities, they can prevent the development of biofilm. Payne J.N. et al. demonstrated that the conjugation of kanamycin with AuNPs (Kan-AuNPs) led to broad-spectrum, dose-dependent activity with a minimum inhibitory concentration lower than that of the antibiotic alone. In this study, the resulting CC50 strongly indicated that Kan-AuNPs would be efficacious in vivo.
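Nanoparticle–antibiotic synergy of the kind reported in these combination studies is commonly quantified with the fractional inhibitory concentration (FIC) index from a checkerboard assay; by convention, an index ≤ 0.5 indicates synergy and > 4 antagonism. The MIC values in the sketch below are hypothetical, chosen only for illustration:

```python
def fic_index(mic_a_alone: float, mic_a_combo: float,
              mic_b_alone: float, mic_b_combo: float) -> float:
    """FIC index = MIC_A(combo)/MIC_A(alone) + MIC_B(combo)/MIC_B(alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fic: float) -> str:
    # Conventional cut-offs: <= 0.5 synergy, > 4.0 antagonism,
    # anything in between additivity/indifference.
    if fic <= 0.5:
        return "synergy"
    if fic > 4.0:
        return "antagonism"
    return "additivity/indifference"

# Hypothetical checkerboard result: the antibiotic MIC drops 8-fold and the
# nanoparticle MIC drops 4-fold when the two are combined.
fic = fic_index(mic_a_alone=16, mic_a_combo=2, mic_b_alone=64, mic_b_combo=16)
print(fic, interpret(fic))  # 0.375 synergy
```

Reporting the FIC index alongside raw MIC values makes fold-enhancement claims, such as the 64-fold gain of vancomycin-capped AuNPs, directly comparable across studies.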
Existing data suggest that nanoparticles can be used to locally deliver drugs and to protect them from pH and enzymatic degradation in the periodontal lesion. Wang L. et al. designed a novel self-assembled, dual-responsive, dual-drug-loading nanocarrier system, which included a hydrophobic lipid core formed by 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-poly(ethylene glycol) (DSPE-PEG) loaded with alpha-lipoic acid (ALA) and a hydrophilic shell comprising a poly(amidoamine) dendrimer (PAMAM) that electrostatically adsorbed minocycline hydrochloride (Mino). This unique design allows the controlled release of the antioxidant ALA under lipase stimulation from periodontal pathogens and of the antimicrobial Mino under the low pH of the inflammatory microenvironment. Another critical challenge for periodontitis therapy is thoroughly eliminating the dental-plaque biofilm, particularly penetrating the deep periodontal tissue without disturbing the commensal microflora of the oral cavity. Tong F. et al. constructed Fe₃O₄ magnetic nanoparticles loading minocycline (FPM NPs) using a co-precipitation method. The multifunctional nanoparticles allowed for improved drug penetration and exhibited intense anti-biofilm activity by disrupting the integrity of the bacterial biofilm. Periodontal inflammation recovered well after FPM NP treatment in rat models, demonstrating good biocompatibility. Comorbidity often occurs in patients with periodontitis, thus representing a double challenge. Xu S. et al. proposed a novel approach for treating the complex relationship between periodontitis and hypertension by combining multiple therapeutic strategies in a single delivery system. In this study, a controlled-release composite hydrogel approach was developed with dual antibacterial and anti-inflammatory activities.
Specifically, a dual antibacterial hydrogel (CS-PA) was fabricated by cross-linking chitosan (CS), which displays inherent antibacterial features, with an antimicrobial peptide (AMP)-modified polyethylene glycol (PEG). For long-term anti-inflammatory effects, curcumin was incorporated into nanoparticles (CNP) and then embedded in the hydrogel. CS-PA/CNP administered to the gingival sulcus in a mouse model of periodontitis complicated with hypertension showed a beneficial therapeutic impact on both periodontitis and hypertension simultaneously. Inflammation plays a critical role in the progression of periodontitis, leading to the degradation of the supportive tissues surrounding the teeth. Early detection and effective management of inflammation are essential for achieving the best possible patient outcomes. However, current treatment methods often fall short, highlighting the need for innovative approaches that integrate traditional therapies with advanced technologies. Nanoparticle-based drug delivery systems offer a promising solution by providing precise, targeted treatment for inflammatory diseases. These systems have several advantages, including high drug-loading capacity, controllable sustained release, and the ability to cross physiological barriers. Specifically, nanoparticles can modulate the immune response and reduce inflammation through various mechanisms, contributing to the preservation of periodontal tissue. They interact with immune cells, such as macrophages and T cells, influencing their behavior and function to mitigate excessive inflammation and promote tissue repair. It has been reported that certain nanoparticles with antioxidant properties can neutralize harmful free radicals generated during inflammation, further protecting periodontal tissues from damage. It has also been suggested that nanoparticles can be applied to restore vital cellular processes such as autophagy.
Recently, gold nanoparticles (AuNPs) have been applied to rescue the osteogenic potential of PDLSCs by restoring the inflammation-compromised autophagy–lysosome system.

4.1. Immunomodulatory Action

In periodontitis, a complex interplay of immune cells, primarily macrophages, neutrophils, T cells, B cells, dendritic cells, and osteoclasts, contributes to both the initiation and progression of inflammation and tissue destruction. Therapeutic strategies that modulate the immune response, such as reprogramming macrophages from a pro-inflammatory M1 state to a healing M2 state, could help reduce inflammation and promote tissue regeneration, offering potential treatments for periodontitis without relying on antibiotics. Shi J. et al. created a resveratrol-loaded liposomal system (Lipo-RSV) to improve the delivery and effectiveness of resveratrol in treating periodontitis. The liposomal formulation enhances the stability and bioavailability of resveratrol, improving therapeutic outcomes. Lipo-RSV was found to regulate macrophages in the immune microenvironment of periodontitis. This nanosystem was able to shift macrophages from a pro-inflammatory M1 phenotype to an anti-inflammatory M2-like phenotype, promoting healing. This process was mediated through the activation of p-STAT3 and the downregulation of p-STAT1. Wang Y. et al. introduced quercetin, a known antioxidant and anti-inflammatory compound, onto nano-octahedral ceria, creating a quercetin-loaded ceria nanocomposite (CeO₂@QU). This nanocomposite synergistically regulates immune responses in periodontal disease by both inhibiting M1 polarization and promoting M2 polarization. The nanocomposite was found to effectively modulate macrophage polarization in in vitro models by increasing the M2/M1 ratio of macrophages after lipopolysaccharide (LPS) stimulation, which mimics bacterial-induced inflammation. Recently, exosome-based drug delivery systems have also been studied.
For example, it was observed that the exosome–curcumin complex enhanced the anti-inflammatory effect of curcumin and, owing to its natural vehicle (the exosome), did not induce an immune response, thus avoiding subsequent side effects. A study found that, in patients with periodontitis, there is a destabilized Th17/Treg balance in the peripheral blood, characterized by upregulated Th17 cells and downregulated Treg cells. This imbalance contributes to the chronic inflammation seen in periodontitis. Recently, exosomes, which are nanosized (30–120 nm), have been explored for their capacity to alleviate the inflammatory microenvironment by influencing the Th17/Treg balance. Zhang Y. et al. investigated the therapeutic potential of mesenchymal-stem-cell-derived exosomes (MSC-exos) in treating periodontitis, particularly focusing on the benefits of using a 3D culture system to improve exosome production and efficacy (3D-exos). This nanosystem offered a more effective treatment approach for periodontitis by restoring the Th17/Treg balance through the miR-1246/Nfat5 axis. Zheng Y. et al. extracted exosomes from PDLSCs stimulated by Porphyromonas gingivalis lipopolysaccharide (LPS). The exosomes from these LPS-stimulated PDLSCs were found to influence CD4+ T cells by modulating the Th17/Treg balance through the miR-155-5p/SIRT1 pathway.

4.2. Regulating Pro-/Anti-Inflammatory Environment

ROS production is a key biological process in macrophages, particularly when they are activated to kill phagocytosed microorganisms. This process is part of the immune response to infections, where ROS help to destroy pathogens. However, excessive ROS production can be detrimental: when macrophages produce too many ROS, they can be pushed toward the pro-inflammatory M1 phenotype. This M1 phenotype is associated with the release of inflammatory cytokines and the promotion of chronic inflammation, which can worsen conditions like periodontitis.
Thus, while ROS are important for immune defense, their overproduction can contribute to the escalation of inflammatory diseases by driving macrophages to a harmful M1 state. Shi J. et al. observed that a resveratrol-loaded liposomal system (Lipo-RSV) was effective in scavenging ROS and inhibiting the NF-κB signaling pathway and inflammasomes. As a result, Lipo-RSV reduced the levels of pro-inflammatory cytokines such as IL-1β, IL-6, and TNF-α, which are typically elevated in periodontitis. In animal models of periodontal inflammation, the quercetin-loaded ceria nanocomposite (CeO₂@QU) significantly reduced pro-inflammatory cytokines such as TNF-α and IL-1β and increased anti-inflammatory cytokines such as IL-10, leading to improved therapeutic outcomes. Xuan L. et al. investigated the use of nanoparticle-encapsulated baicalein (Nano-BE) to modulate the pro-inflammatory response in human gingival epithelial cells (hGECs). In this study, baicalein (BE), known for its anti-inflammatory properties, was encapsulated in amine-modified mesoporous silica nanoparticles (MSNs), enhancing its solubility and bioavailability. The study showed that this encapsulation improved drug-loading efficiency and allowed a sustained release of the drug for up to 216 h. Nano-BE treatment significantly downregulated the IL-1β-induced expression of pro-inflammatory cytokines, particularly IL-6 and IL-8, suggesting that Nano-BE effectively modulates the inflammatory response in gingival epithelial cells. Bao X. et al. explored the use of polydopamine nanoparticles (PDA-NPs) as efficient scavengers of ROS in the treatment of oxidative-stress-induced periodontal disease. In a murine periodontitis model, PDA-NPs demonstrated robust antioxidative effects, efficiently scavenging multiple types of ROS, such as superoxide anion (O₂⁻), hydrogen peroxide (H₂O₂), and hydroxyl radicals (OH·).
Non-steroidal anti-inflammatory drug (NSAID)-loaded nanoparticles represent a significant advancement in drug delivery systems, offering a targeted and controlled approach to administering NSAIDs. By encapsulating NSAIDs within tiny particles, typically less than 100 nanometers in size, these formulations can enhance drug bioavailability, reduce systemic side effects, and improve therapeutic efficacy. These nanoparticles can be engineered from various materials, including polymers, lipids, and inorganic compounds, each offering unique advantages in terms of drug loading capacity, release kinetics, and biocompatibility . Once administered, these nanoparticles can target specific tissues or cells. They can also be designed to release the drug in a controlled manner, either as a sustained release or as an on-demand release, optimizing therapeutic efficacy and minimizing adverse effects. They are used for rheumatoid arthritis, osteoarthritis, skin inflammation, and dental pain. In the oral cavity, NSAID-loaded nanoparticles can be used to reduce pain and swelling after dental procedures . Osorio M.T. et al. developed doxycycline-doped nanoparticles to obtain an anti-inflammatory response in periodontal-ligament-derived stem cells (PDLSCs), which play an irreplaceable role in the regeneration of periodontal tissues and the maintenance of their homeostasis . In this study, the NPs were found to be biocompatible and non-toxic for PDLSCs, promoting the differentiation of PDLSCs into osteoblasts and cementoblasts, which are essential for bone and tissue regeneration. They effectively reduced the inflammatory response of PDLSCs, particularly when exposed to inflammatory mediators like IL-1β. Therefore, these doxycycline-loaded NPs could be a promising therapeutic approach for periodontitis thanks to their sustained release of the antibiotic, enhancing tissue regeneration, reducing inflammation, and improving periodontal health .
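The loading performance mentioned above is conventionally reported with two simple metrics: encapsulation efficiency (EE%, the fraction of the drug fed into the formulation that ends up entrapped) and drug loading (DL%, the drug mass relative to the total nanoparticle mass). The sketch below uses a hypothetical batch; the numbers are illustrative only, not taken from any cited study.

```python
# Illustrative sketch (hypothetical batch, not from the cited studies):
# standard loading metrics for drug-loaded nanoparticles,
#   EE% = (drug added - free drug) / drug added * 100
#   DL% = encapsulated drug / total nanoparticle mass * 100
def loading_metrics(drug_added_mg, free_drug_mg, nanoparticle_mass_mg):
    """Return (EE%, DL%) for a nanoparticle batch."""
    encapsulated = drug_added_mg - free_drug_mg
    ee = 100.0 * encapsulated / drug_added_mg
    dl = 100.0 * encapsulated / nanoparticle_mass_mg
    return ee, dl

# Hypothetical batch: 10 mg NSAID added, 2.5 mg recovered free in the
# supernatant, 50 mg of recovered drug-loaded nanoparticles
ee, dl = loading_metrics(10.0, 2.5, 50.0)   # EE = 75.0 %, DL = 15.0 %
```

In practice the free-drug fraction is measured in the supernatant after separating the particles (e.g., by centrifugation), which is why EE% and DL% are always reported together.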
Traditional methods for periodontal tissue reconstruction, such as allogeneic and autologous grafts, often present limitations, including limited tissue availability, donor site morbidity, immune rejection, and invasiveness .
To address these challenges, nanotechnology offers a promising alternative, as it enables the manipulation of materials at the nanoscale, enhancing their properties and functionalities. Various nanomaterials, such as nanoparticles, nanocapsules, and nanofibers, have shown potential for oral tissue reconstruction and restoration . For instance, calcium phosphate nanoparticles loaded with chlorhexidine, coated with carboxymethylcellulose to enhance bio-adhesion, have demonstrated antimicrobial and mineralizing effects . Poly(D,L-lactide-co-glycolide) (PLGA)-based nanoparticles incorporating lovastatin and tetracycline facilitate the sequential release of drugs, providing a dual effect of infection control and bone regeneration . Natural polymers like chitosan have further been utilized in core–shell PLGA–chitosan nanospheres encapsulating drugs like simvastatin and doxycycline, promoting periodontium repair in infected areas and enhancing osteogenesis in bone defects . Innovations with materials like Laponite (LAP)-embedded polycaprolactone (PCL) further advance bone regeneration by promoting cellular viability, differentiation, and vascularization . Moreover, nano-hydroxyapatite (nHA) remains a highly researched biomaterial for alveolar bone regeneration due to its promising results in supporting bone tissue repair.

5.1. Nanohydroxyapatite (nHA)

Hydroxyapatite (HA) is a widely studied biomaterial in medicine and dentistry due to its excellent biocompatibility and its natural occurrence in hard tissues like bone and teeth. As a significant source of calcium and phosphate, HA is particularly useful for alveolar bone regeneration. However, traditional HA often shows poor mechanical properties due to its porous structure. Nanohydroxyapatite (nHA), on the other hand, displays improved properties: with its smaller particle size, nHA exhibits increased solubility, higher surface energy, and enhanced biocompatibility.
The larger surface area that accompanies this reduced particle size also contributes to excellent bioactivity compared with larger HA crystals . nHA is considered a promising scaffolding material for bone regeneration due to its structural similarity to natural bone. Research suggests that nHA-based scaffolds, such as Gel–nHA, can promote tissue regeneration, making them suitable for endodontic applications . To produce nHA, various synthesis techniques exist, including co-precipitation, wet precipitation, hydrothermal, mechanochemical, hydrolysis, solid-state, and sol-gel methods. Among these, wet chemical precipitation is the most widely used due to its simplicity, reproducibility, and environmentally friendly nature, producing only water as a byproduct . For large-scale and rapid synthesis, microwave hydrothermal methods combined with ultrasonic atomization precipitation offer advantages. This approach produces nHA powders with a homogeneous size distribution and excellent dispersibility . In addition, nHA coatings on dental implants, such as stainless steel and titanium, enhance bone integration and new bone formation, leading to improved bone-to-implant contact (BIC) . de Oliveira P.G.F.P. et al. have demonstrated that nHA coatings stimulate cellular activity, including osteoblast and osteoclast activity, and promote bone regeneration . Yamada M. et al. explored the effect of a nanopolymorphic crystalline hydroxyapatite coating on microroughened titanium implants for improving bone–implant integration. The HA coating, created using flame spray and low-temperature calcination, was found to increase surface area and enhance the osteoconductivity of the implants. In a rat model, HA-coated implants showed significant improvements in bone–implant integration, with higher bone contact and bone volume within 50 μm of the implant surface and reduced soft tissue interference .
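As a back-of-the-envelope illustration of the wet chemical precipitation route mentioned above, the commonly used neutralization reaction 10 Ca(OH)2 + 6 H3PO4 → Ca10(PO4)6(OH)2 + 18 H2O fixes the Ca/P molar ratio of stoichiometric hydroxyapatite at 10/6 ≈ 1.67. The short script below (an illustrative sketch, not taken from the cited protocols) computes the reagent masses needed for a target batch.

```python
# Illustrative sketch (not from the cited protocols): stoichiometric reagent
# quantities for wet-precipitation hydroxyapatite synthesis via
#   10 Ca(OH)2 + 6 H3PO4 -> Ca10(PO4)6(OH)2 + 18 H2O
# which fixes the Ca/P molar ratio of stoichiometric HA at 10/6 ~ 1.67.
ATOMIC_MASS = {"Ca": 40.078, "P": 30.974, "O": 15.999, "H": 1.008}

def molar_mass(composition):
    """Molar mass (g/mol) from an element -> atom-count mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

M_CA_OH2 = molar_mass({"Ca": 1, "O": 2, "H": 2})        # Ca(OH)2
M_H3PO4 = molar_mass({"H": 3, "P": 1, "O": 4})          # H3PO4
M_HA = molar_mass({"Ca": 10, "P": 6, "O": 26, "H": 2})  # Ca10(PO4)6(OH)2

def reagents_for_nha(target_mass_g):
    """Reagent masses (g) needed to precipitate a target mass of hydroxyapatite."""
    mol_ha = target_mass_g / M_HA
    return {
        "Ca(OH)2_g": 10 * mol_ha * M_CA_OH2,
        "H3PO4_g": 6 * mol_ha * M_H3PO4,
        "Ca/P_molar_ratio": 10 / 6,
    }

batch = reagents_for_nha(10.0)  # a hypothetical 10 g batch
```

Deviations from the 1.67 Ca/P ratio in the precursor feed are one reason precipitated powders can end up calcium-deficient, which is why the ratio is routinely checked.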
Additionally, combining nHA with polyacrylamide-based hydrogels (PAAM) enhances post-extraction preservation by supporting osteoblast infiltration, cell adhesion, and fluid retention, while providing strength, degradability, and low cytotoxicity . In oral surgery, nano-polymorphic crystalline HA deposited on titanium surfaces via flame spray and low-temperature calcination boosts bone–implant integration, with improved osteoconductivity localized to the microenvironment . Nano-crystalline hydroxyapatite also binds bone, encouraging osteoblast activity and aiding in bone healing, as evidenced in clinical trials where it supported periodontal tissue regeneration . In studies of nano-HA paste, human periodontal ligament (PDL) cells proliferated in response to the paste, with underlying mechanisms involving the activation of the epidermal growth factor receptor (EGFR) and downstream signaling pathways contributing to periodontal tissue regeneration .

5.2. Nanostructured Scaffolds for Tissue Engineering

Tissue engineering integrates principles from biology and engineering to restore, maintain, or improve tissue function. A common approach involves seeding cells onto a biomaterial scaffold to create functional tissue in vitro, which can then be implanted into a patient . Tissue engineering scaffolds must possess specific properties to facilitate tissue regeneration: they should promote cell adhesion and proliferation, provide a porous structure, degrade at a controlled rate, and provide mechanical support. To mimic the natural extracellular matrix (ECM), scaffolds are often designed with nanofibrous structures, porous architectures, and biomimetic materials .

5.3. Nanofibrous Scaffolds

Nanofiber scaffolds have emerged as a promising technology in the field of tissue engineering, particularly for craniofacial regeneration.
These scaffolds, fabricated through techniques like electrospinning, self-assembly, and phase separation, offer a versatile platform for creating biomimetic environments that mimic the extracellular matrix (ECM). By tailoring the fiber diameter, pore size, and mechanical properties, researchers can design scaffolds that support cell adhesion, proliferation, and differentiation . To enhance the bioactivity of nanofiber scaffolds, researchers have incorporated various bioactive molecules, including growth factors, cytokines, and extracellular matrix proteins. These molecules can stimulate cell proliferation, differentiation, and migration, leading to improved tissue regeneration. Additionally, the incorporation of mineral phases, such as hydroxyapatite, can further enhance the osteoconductive properties of the scaffolds . The electrospun nanofibrous membranes of synthetic polymers are widely used in the biomedical field, and polycaprolactone (PCL), polylactic acid (PLA), and poly(lactic acid-co-glycolic acid) (PLGA) are commonly utilized in the treatment of oral diseases . In particular, PCL, a biodegradable and biocompatible polymer, offers excellent permeability, flexibility, and ease of processing. Importantly, it does not generate acidic byproducts during degradation, maintaining the stability of the oral environment . Electrospun PLA nanofibrous membranes are a popular treatment for periodontitis. PLGA nanofibers prepared via electrospinning have gained wide adoption in the treatment of oral diseases, including periodontitis, pulp disease, and tissue regeneration. PLGA-drug-loaded barrier membranes can provide controlled drug release and promote bone tissue growth. Ma et al. observed successful prevention of decreased alveolar ridge height and increased bone growth using PLGA-loaded minocycline (MINO) nanofibers . A novel biodegradable, antibacterial, and osteoconductive electrospun PLGA/PCL membrane was tested as an ideal osteogenic scaffold by Qian Y. et al. . 
It was structured with serial layers of electrospun chlorhexidine-doped PLGA/PCL, PLGA/PCL, and β-tricalcium phosphate-doped PLGA/PCL. The results suggested that it had superior properties such as higher strength, better cell adhesion, greater osteoconductive properties compared to a single-layer membrane, and antibacterial properties . Recent studies have demonstrated the potential of nanofiber scaffolds for craniofacial regeneration. Xu et al. investigated the use of nanosilicate-functionalized polycaprolactone (PCL/LAP) nanofibrous membranes for periodontal regeneration. The incorporation of Laponite (LAP) enhanced cell proliferation and osteogenic differentiation and modulated the immune responses of periodontal ligament cells (PDLCs), leading to improved bone formation and periodontal attachment in rat models . Kuchler-Bopp et al. explored the use of polycaprolactone (PCL) scaffolds functionalized with poly(lactic-co-glycolic acid) (PLGA) nanoparticles loaded with cyclosporine A (CsA) to promote innervation of bioengineered teeth. The CsA-loaded PLGA nanoparticles, synthesized using a microfluidic method, enhanced innervation within the dental pulp without affecting overall tooth development . Furthermore, studies on nanohydroxyapatite/chitosan/gelatin (nHA/CG) scaffolds seeded with human periodontal ligament stem cells (hPDLSCs) have shown promising results in large jawbone defect regeneration in minipigs. The hPDLSCs adhered well to the nHA/CG scaffolds and significantly enhanced bone formation, suggesting the potential of this approach for future clinical applications .

The biocompatibility of NPs is one of the most critical characteristics that determine whether nanoplatforms are suitable for biomedical applications. For NPs to be effectively used in medical treatments, they must interact safely with biological systems without causing harmful effects, such as toxicity, inflammation, or immune system activation.
Their biocompatibility ensures that they can be safely introduced into the body, whether for drug delivery, imaging, or tissue engineering, without adverse reactions or long-term harm. This includes factors such as non-toxicity, minimal immune response, biodegradability, and the ability to integrate with the body’s tissues without causing rejection or damage. Consequently, a thorough assessment of the biocompatibility of NPs is essential for their successful translation from the laboratory to clinical use in medical and dental fields. There are several surface modification methods available for NPs aimed at optimizing their biocompatibility and enhancing their performance in biomedical applications. These modifications are crucial for ensuring that NPs can interact safely with biological systems, minimizing toxicity and improving their ability to target specific cells or tissues . Coating with polyethylene glycol (PEG), a strategy known as PEGylation, is one of the most widely used surface modifications for NPs. It creates a hydrophilic, biocompatible layer that helps prevent immune recognition, reduces opsonization (the process by which particles are marked for clearance by the immune system), and increases the stability of the NPs in biological environments . Functionalizing the surface of NPs with biomolecules such as peptides, antibodies, or proteins can enhance their ability to target specific cells or tissues . To reduce the toxicity of NPs, dextran is widely used as a surface modification agent . Dextran is a complex branched polysaccharide derived from glucose, known for its biocompatibility, biodegradability, and low immunogenicity. It is commonly employed to modify the surface of iron oxide nanoparticles (Fe 3 O 4 NPs), providing several benefits in biomedical applications . The findings in the work of Huang Y. et al. suggest the biocompatibility of dextran-coated iron oxide nanozymes (Dex-IONP-GOx) as an antimicrobial treatment, supporting their safe application for biomedical purposes.
Histopathological analysis of gingival tissues, as well as the liver and kidney, showed no visible signs of harmful effects, such as proliferative changes, vascularization issues, necrosis, or acute inflammatory responses, after treatment . Natural biopolymers like chitosan, alginate, and hyaluronic acid can also be used to coat or modify the surface of NPs. Polymers such as poly(lactic-co-glycolic acid) (PLGA) are often used for this purpose, as they degrade safely into non-toxic byproducts. These polymers are biocompatible, biodegradable, and less likely to induce an immune response. They can also enhance the controlled release of drugs, improve the stability of NPs, and aid in the targeting of specific tissues . Poly(α-hydroxy-esters)-chitosan core–shell nanospheres adopted in different studies not only provide a hydrophilic extracellular matrix–like surface, promoting enhanced cell affinity, but they also enable the sequential release of multiple drugs . This unique combination of properties makes them highly suitable for applications in drug delivery systems, where controlled and targeted release of therapeutic agents is essential. The hydrophilic surface mimics the natural extracellular matrix, facilitating better interaction with cells, while the core–shell structure allows for the encapsulation and sustained release of different drugs in a controlled manner, improving treatment efficacy and minimizing side effects. Hydrogels can be incorporated onto the surface of nanoparticles to enhance their biocompatibility by providing a cushioning layer that mimics natural tissue environments. This is especially useful for applications in regenerative medicine and tissue engineering, where the nanoparticles must integrate with biological tissues without causing an inflammatory response. Fang J. et al.
developed a strong, tough, osteoconductive hydrogel through a simple one-step micellar copolymerization of acrylamide and urethacrylate dextran (Dex-U), followed by the in situ mineralization of hydroxyapatite (HAp) nanocrystals. The biocompatibility of the HAp-PADH hydrogel is attributed to its hydrophilic surface, the use of biocompatible materials like dextran and hydroxyapatite, and its ability to support cellular functions such as osteoblast proliferation and differentiation . These properties not only reduce the risk of toxicity or immune rejection but also enhance its potential for use in bone tissue engineering and other regenerative medical applications. Overall, while surface modifications are widely used to enhance the biocompatibility, stability, and functionality of NPs, the exact mechanisms by which these coatings interact with biological systems are still not fully understood. Elucidating the specific interactions of various surface coatings, as well as refining their delivery mechanisms, continues to be an ongoing area of research. Advances in this field are crucial for optimizing the targeted delivery, controlled release, and overall safety of nanomaterials in biomedical applications. One of the primary challenges with nanomaterials is ensuring biocompatibility. Some nanoparticles may trigger immune responses or cytotoxicity, leading to inflammation, irritation, or damage to healthy tissue . More importantly, the behavior and properties of metals change significantly from the micro- to the nanoscale, and therefore their potential toxicity at the nanoscale must be deeply investigated . Addressing these issues requires comprehensive surface modifications and functionalization with biocompatible molecules to enhance their safety profile. The long-term effects of nanomaterials in the oral cavity are still not fully understood, making it important to evaluate their safety before widespread use.
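Quantitatively, a common first screen for the cytotoxicity concerns raised above is a metabolic-activity assay such as MTT, in which the viability of treated cells is expressed relative to an untreated control; by the ISO 10993-5 convention, relative viability below about 70% flags a potentially cytotoxic condition. The sketch below uses hypothetical absorbance readings, not data from any cited study.

```python
# Illustrative sketch (hypothetical numbers, not data from the cited studies):
# relative cell viability from an MTT-type absorbance readout, a common first
# screen for nanoparticle cytotoxicity. By the ISO 10993-5 convention,
# viability below ~70% of the untreated control flags potential cytotoxicity.
def relative_viability(abs_treated, abs_blank, abs_control):
    """Viability (%) of treated cells relative to the untreated control."""
    return 100.0 * (abs_treated - abs_blank) / (abs_control - abs_blank)

doses_ug_per_ml = [0, 10, 50, 100]   # hypothetical nanoparticle doses
od570 = [0.92, 0.90, 0.81, 0.55]     # hypothetical absorbance readings
blank, control = 0.08, 0.92          # medium-only blank, untreated control

viability = [relative_viability(a, blank, control) for a in od570]
cytotoxic_doses = [d for d, v in zip(doses_ug_per_ml, viability) if v < 70.0]
```

Blank subtraction matters here: without it, background absorbance from the medium inflates the apparent viability at every dose.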
Nanomaterials used in periodontal treatments must be stable in the oral environment, where factors like saliva, enzymes, and the presence of biofilms can degrade materials or alter their properties. To address these challenges, researchers are developing nanomaterials with specific properties that enhance stability in the oral environment. For example, hydrophilic coatings can help resist enzymatic degradation, while hydrophobic coatings might resist bacterial attachment. Furthermore, delivering nanomaterials across the biofilm or infected plaque, where bacteria reside in a protective matrix, can be difficult. Nanomaterials must be able to penetrate these biofilms and interact directly with the biofilm-forming bacteria while leaving healthy tissues unaltered.

Inorganic nanoparticles (NPs) with enzyme-like properties, called nanozymes, are designed to mimic the activity of natural enzymes without relying on complex biological machinery. Nanozymes are promising for overcoming the above limitations by degrading key components of the oral biofilm, which weakens the biofilm structure and facilitates the penetration of therapeutic agents. They catalyze reactions through mechanisms like oxidation, reduction, hydrolysis, and metal ion coordination, depending on their size, shape, surface charge, and functionalization. For instance, Zhang et al. developed DNA nanozymes for detecting dental bacteria, while Huang et al. created iron oxide nanozymes to selectively inhibit biofilms formed by Streptococcus mutans.

Achieving precise targeting of nanomaterials to the infected tissues is essential for maximizing the effectiveness of treatment. While nanocarriers can deliver drugs or bioactive molecules directly to the site of infection, challenges remain in controlling release profiles and preventing premature degradation or drug leakage.
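Release profiles from polymeric nanocarriers are often summarized with simple empirical models; one common choice is the Korsmeyer–Peppas power law, Mt/M∞ = k·t^n, where the exponent n hints at the transport mechanism. A minimal sketch (the rate constant, exponent, and thresholds below are illustrative textbook values, not taken from any study cited above):

```python
# Illustrative Korsmeyer-Peppas power-law model for drug release from a
# polymeric carrier. k and n are hypothetical placeholder values.

def korsmeyer_peppas(t_hours, k=0.25, n=0.45):
    """Cumulative fraction of drug released at time t (the model is usually
    applied only while the released fraction is below ~0.6)."""
    return min(k * t_hours ** n, 1.0)

def transport_mechanism(n):
    """Classify the release exponent using the classic thin-film thresholds;
    sphere and cylinder geometries shift these boundary values."""
    if n <= 0.5:
        return "Fickian diffusion"
    elif n < 1.0:
        return "anomalous (non-Fickian) transport"
    return "case-II (polymer relaxation) transport"

for t in (1, 4, 8):
    print(f"t = {t} h: released fraction = {korsmeyer_peppas(t):.2f}")
print(transport_mechanism(0.45))  # -> Fickian diffusion
```

In practice, k and n are obtained by fitting log(Mt/M∞) against log(t) for the early time points of an experimental release curve.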
For instance, some nanoparticles may lose their functional properties over time, reducing their effectiveness in sustained drug release or antibacterial action. Ensuring the long-lasting effectiveness of nanomaterials is essential for providing therapeutic benefits over the course of treatment.

The use of nanomaterials in medical applications is still a relatively new field, and regulatory frameworks for their approval in periodontitis treatments are still developing. Regulatory bodies, such as the FDA or EMA, require rigorous testing to ensure that these materials are both safe and effective for human use. There are also ethical concerns surrounding the potential unintended consequences of using nanomaterials, especially regarding long-term health impacts and the environmental consequences of widespread nanomaterial use. Nanomaterials can persist in soil, water, and air, potentially influencing ecosystems; however, green chemistry approaches, which emphasize environmental safety, efficiency, and economy, seek to reduce this risk.

Future research should focus on long-term in vivo studies in animal models to evaluate the safety and efficacy of nanoparticle-based scaffolds, as well as on exploring synergistic effects with other bioactive molecules or growth factors to further enhance bone regeneration therapies. For example, the use of stem cells in combination with nanomaterials raises several regulatory and ethical concerns. The safety and efficacy of stem-cell–nanomaterial combinations must be rigorously evaluated. Regulatory bodies, such as the FDA and EMA, require comprehensive preclinical and clinical testing to ensure that these treatments do not pose undue risks, such as immune reactions, tumor formation, or unintended biological interactions.

The future of nanomaterial-based therapies for periodontitis lies in improving the targeted delivery of antibacterial agents, anti-inflammatory drugs, or growth factors directly to the sites of infection.
Nanoparticles can be functionalized to selectively bind to specific bacterial strains or tissue receptors involved in inflammation. The development of smart nanomaterials capable of responding to changes in the local environment (such as pH, temperature, or enzyme activity) could offer on-demand drug release, ensuring that therapeutic agents are only released when needed and at the right concentration.

Nanomaterials with inherent antibacterial properties, such as silver nanoparticles, copper oxide nanoparticles, or graphene-based materials, are being explored for their ability to kill bacteria directly. These materials can be incorporated into toothpastes, gels, or mouthwashes to provide localized, sustained antimicrobial activity.

Advances in nanotechnology could lead to personalized periodontal treatments based on an individual’s specific bacterial profile. For example, nanomaterials could be tailored to target particular pathogens or inflammatory pathways specific to a patient’s condition, improving treatment efficacy and reducing side effects.

Nanomaterial-based sensors embedded in dental implants or as part of wearable devices could enable real-time monitoring of oral health and bacterial activity. These sensors could be used to track early signs of infection, allowing for proactive interventions before periodontitis progresses.

While significant progress has been made in the development of nanofiber scaffolds for craniofacial regeneration, several challenges remain. These include the need for improved control over fiber alignment and pore structure, the development of scalable manufacturing processes, and the long-term evaluation of biocompatibility and efficacy. Nanomedicine holds transformative potential for the future of dentistry and periodontal therapy, offering more effective, targeted, and minimally invasive treatment options.
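The stimulus-responsive "on-demand" release idea mentioned above can be reduced to a simple gating rule: infected or inflamed periodontal sites tend to be more acidic than healthy ones, so a pH-responsive carrier releases its payload only when the local pH drops below a set point. A minimal sketch (the threshold and rate values are hypothetical, for illustration only):

```python
# Illustrative sketch of a pH-gated "smart" release rule. The threshold and
# rate constants are hypothetical; real carriers respond continuously rather
# than as a sharp on/off switch.

def release_rate(local_ph, baseline=0.0, triggered=0.05, ph_threshold=6.0):
    """Fractional drug release rate per hour at the given local pH."""
    return triggered if local_ph < ph_threshold else baseline

for ph in (7.2, 6.5, 5.5):
    state = "triggered" if release_rate(ph) > 0 else "dormant"
    print(f"pH {ph}: {state}, release rate = {release_rate(ph):.2f}/h")
```

The same gating pattern applies to other stimuli named in the text (temperature, enzyme activity): the trigger variable changes, but the design goal of confining release to diseased sites is identical.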
With the continued development of nanotechnology, it is expected to revolutionize the way dental professionals prevent, diagnose, and treat a range of oral conditions, especially periodontal diseases. However, successful application in clinical settings will require addressing key challenges related to biocompatibility, targeted delivery, stability, and cost-effectiveness.

While nanoparticles hold immense potential to revolutionize dentistry through their diagnostic, therapeutic, and regenerative applications, addressing key challenges is essential for their broader adoption. Cost-effectiveness remains a critical concern, as the synthesis and functionalization of nanoparticles can be expensive, limiting their accessibility. Nanozymes, however, are a notable exception: they are stable, easy to synthesize, and inexpensive compared to natural enzymes. Additionally, achieving regulatory approval involves rigorous safety and efficacy evaluations, which can be time-consuming and complex. Overcoming these challenges will require collaborative efforts in research, innovation, and policymaking to ensure that nanoparticle-based solutions are not only effective but also affordable and widely available for clinical use.

The future of nanomaterials in periodontitis treatment lies in the development of smart, responsive systems that provide personalized, localized, and non-invasive treatment options for patients, ultimately contributing to more effective and efficient management of periodontal disease. While research and clinical trials in humans are still in the early stages, there is growing evidence that nanomedicine can offer significant benefits for the treatment of periodontal diseases. To further advance the practical application of nanotechnology-based systems, it is essential to explore several key avenues of research.
First, long-term clinical trials are needed to assess the safety, efficacy, and potential toxicity of these systems in real-world settings. While preclinical studies often show promising results, the translation of nanotechnology into clinical practice requires robust data from extended trials to evaluate long-term effects and to ensure consistent performance across different patient populations.

Second, real-world testing of nanoparticle-based systems is essential to determine their behavior in complex biological environments, as laboratory conditions often differ from the dynamic conditions in human tissues. This includes understanding how surface coatings affect nanoparticle distribution, immune response, and clearance over time, as well as how they interact with other drugs or medical devices in clinical use.

Lastly, personalized nanomedicine will benefit from deeper exploration into the customization of surface coatings for specific patient needs, considering factors such as genetic variability, disease stage, and tissue-specific targeting. Interdisciplinary collaboration among material scientists, biologists, and clinicians will be key to optimizing these systems for patient-specific therapies.

By focusing on these future research directions (long-term clinical trials, real-world testing, and personalized approaches), the potential of nanotechnology-based systems can be more effectively realized, paving the way for their successful integration into mainstream medical treatments.